2026-03-09T17:13:46.256 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T17:13:46.260 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T17:13:46.284 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583
branch: squid
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests}
email: null
first_in_suite: false
flavor: default
job_id: '583'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      client:
        debug ms: 1
      global:
        mon election default strategy: 3
        ms bind msgr1: false
        ms bind msgr2: true
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on pool no app: false
      osd:
        debug ms: 1
        debug osd: 20
        osd class default list: '*'
        osd class load list: '*'
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - reached quota
    - but it is still running
    - overall HEALTH_
    - \(POOL_FULL\)
    - \(SMALLER_PGP_NUM\)
    - \(CACHE_POOL_NO_HIT_SET\)
    - \(CACHE_POOL_NEAR_FULL\)
    - \(POOL_APP_NOT_ENABLED\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: root
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFEwcypBjmMnfrwYsDATwO0OHl3Q4hz5ndWNMvSTDQ5jHRv8oAmwTjP2ZcH3OQAr6ZnVMcpfQADiwZKGtPN26Og=
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAKV15g4+82rrgE5eLAlsU16cd/QptAH3dFr4YPrnXHT1BPgaoOAM888GAH81m7Zi145secbNhJxDXbe8nj+CNI=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test.sh
      - rados/test_pool_quota.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T17:13:46.285 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T17:13:46.285 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T17:13:46.285 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T17:13:46.285 INFO:teuthology.task.internal:Checking packages...
2026-03-09T17:13:46.285 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T17:13:46.286 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T17:13:46.286 INFO:teuthology.packaging:ref: None
2026-03-09T17:13:46.286 INFO:teuthology.packaging:tag: None
2026-03-09T17:13:46.286 INFO:teuthology.packaging:branch: squid
2026-03-09T17:13:46.286 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:13:46.286 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T17:13:46.895 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T17:13:46.896 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T17:13:46.897 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T17:13:46.897 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T17:13:46.898 INFO:teuthology.task.internal:Saving configuration
2026-03-09T17:13:46.902 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T17:13:46.903 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T17:13:46.911 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 17:12:37.149496', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFEwcypBjmMnfrwYsDATwO0OHl3Q4hz5ndWNMvSTDQ5jHRv8oAmwTjP2ZcH3OQAr6ZnVMcpfQADiwZKGtPN26Og='}
2026-03-09T17:13:46.916 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 17:12:37.149923', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAKV15g4+82rrgE5eLAlsU16cd/QptAH3dFr4YPrnXHT1BPgaoOAM888GAH81m7Zi145secbNhJxDXbe8nj+CNI='}
2026-03-09T17:13:46.917 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T17:13:46.918 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-09T17:13:46.918 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-09T17:13:46.918 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T17:13:46.924 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-09T17:13:46.928 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-09T17:13:46.929 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f566691a170>, signals=[15])
2026-03-09T17:13:46.929 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T17:13:46.947 INFO:teuthology.task.internal:Opening connections...
2026-03-09T17:13:46.949 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-09T17:13:46.949 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T17:13:47.011 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-09T17:13:47.011 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T17:13:47.068 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T17:13:47.070 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-09T17:13:47.083 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-09T17:13:47.103 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Ubuntu"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="22.04"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_CODENAME=jammy
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:ID=ubuntu
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE=debian
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T17:13:47.129 INFO:teuthology.orchestra.run.vm00.stdout:UBUNTU_CODENAME=jammy
2026-03-09T17:13:47.129 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-09T17:13:47.135 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-09T17:13:47.143 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-09T17:13:47.143 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:NAME="Ubuntu"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="22.04"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_CODENAME=jammy
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:ID=ubuntu
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE=debian
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T17:13:47.189 INFO:teuthology.orchestra.run.vm02.stdout:UBUNTU_CODENAME=jammy
2026-03-09T17:13:47.189 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-09T17:13:47.193 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T17:13:47.196 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T17:13:47.197 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T17:13:47.197 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-09T17:13:47.198 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-09T17:13:47.233 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T17:13:47.234 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T17:13:47.234 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-09T17:13:47.242 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-09T17:13:47.244 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T17:13:47.277 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T17:13:47.278 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T17:13:47.287 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-09T17:13:47.290 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T17:13:47.636 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-09T17:13:47.639 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T17:13:48.131 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T17:13:48.132 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T17:13:48.133 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T17:13:48.134 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T17:13:48.137 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T17:13:48.138 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T17:13:48.139 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T17:13:48.139 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T17:13:48.178 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T17:13:48.184 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T17:13:48.186 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T17:13:48.186 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T17:13:48.223 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T17:13:48.223 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T17:13:48.226 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T17:13:48.226 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T17:13:48.266 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T17:13:48.273 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T17:13:48.278 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T17:13:48.279 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T17:13:48.282 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T17:13:48.282 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T17:13:48.284 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T17:13:48.284 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T17:13:48.322 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T17:13:48.335 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T17:13:48.338 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T17:13:48.338 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T17:13:48.378 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T17:13:48.381 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T17:13:48.425 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T17:13:48.472 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-09T17:13:48.472 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T17:13:48.521 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T17:13:48.524 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T17:13:48.568 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-09T17:13:48.568 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T17:13:48.616 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-09T17:13:48.617 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-09T17:13:48.671 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T17:13:48.673 INFO:teuthology.task.internal:Starting timer...
2026-03-09T17:13:48.673 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T17:13:48.675 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T17:13:48.678 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-09T17:13:48.678 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-09T17:13:48.678 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T17:13:48.678 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T17:13:48.678 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T17:13:48.678 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T17:13:48.679 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T17:13:48.680 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T17:13:48.681 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T17:13:49.302 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T17:13:49.307 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T17:13:49.308 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryd8pceijm --limit vm00.local,vm02.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T17:18:22.614 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm02.local')]
2026-03-09T17:18:22.614 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-09T17:18:22.615 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T17:18:22.672 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-09T17:18:22.768 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-09T17:18:22.769 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-09T17:18:22.769 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T17:18:22.840 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-09T17:18:23.040 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-09T17:18:23.040 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T17:18:23.049 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T17:18:23.050 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T17:18:23.050 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T17:18:23.051 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T17:18:23.051 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Command line: ntpd -gq 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: ---------------------------------------------------- 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: corporation. 
Support and training for ntp-4 are 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: available at https://www.nwtime.org/support 2026-03-09T17:18:23.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: ---------------------------------------------------- 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: proto: precision = 0.029 usec (-25) 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: basedate set to 2022-02-04 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: gps base set to 2022-02-06 (week 2196) 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stderr: 9 Mar 17:18:23 ntpd[16164]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listen normally on 3 ens3 192.168.123.100:123 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listen normally on 4 lo [::1]:123 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:0%2]:123 2026-03-09T17:18:23.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:23 ntpd[16164]: Listening on routing socket on fd #22 for interface updates 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Command line: ntpd -gq 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: ---------------------------------------------------- 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: corporation. 
Support and training for ntp-4 are 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: available at https://www.nwtime.org/support 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: ---------------------------------------------------- 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: proto: precision = 0.029 usec (-25) 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: basedate set to 2022-02-04 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: gps base set to 2022-02-06 (week 2196) 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stderr: 9 Mar 17:18:23 ntpd[16088]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T17:18:23.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T17:18:23.098 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T17:18:23.098 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listen normally on 3 ens3 192.168.123.102:123 2026-03-09T17:18:23.098 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listen normally on 4 lo [::1]:123 2026-03-09T17:18:23.098 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:2%2]:123 2026-03-09T17:18:23.098 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:23 ntpd[16088]: Listening on routing socket on fd #22 for interface updates 2026-03-09T17:18:24.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:24 ntpd[16164]: Soliciting pool server 5.45.97.204 2026-03-09T17:18:24.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:24 ntpd[16088]: Soliciting pool server 51.75.67.47 2026-03-09T17:18:25.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:25 ntpd[16164]: Soliciting pool server 129.70.132.35 2026-03-09T17:18:25.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:25 ntpd[16164]: Soliciting pool server 162.19.170.154 2026-03-09T17:18:25.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:25 ntpd[16088]: Soliciting pool server 5.45.97.204 2026-03-09T17:18:25.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:25 ntpd[16088]: Soliciting pool server 144.76.76.107 2026-03-09T17:18:26.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:26 ntpd[16164]: Soliciting pool server 78.46.87.46 2026-03-09T17:18:26.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:26 ntpd[16164]: Soliciting pool server 144.76.167.162 2026-03-09T17:18:26.066 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:26 ntpd[16164]: Soliciting pool server 158.180.28.150 2026-03-09T17:18:26.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:26 ntpd[16088]: Soliciting 
pool server 162.19.170.154 2026-03-09T17:18:26.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:26 ntpd[16088]: Soliciting pool server 129.70.132.35 2026-03-09T17:18:26.098 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:26 ntpd[16088]: Soliciting pool server 77.42.16.222 2026-03-09T17:18:27.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:27 ntpd[16164]: Soliciting pool server 212.132.97.26 2026-03-09T17:18:27.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:27 ntpd[16164]: Soliciting pool server 78.46.204.247 2026-03-09T17:18:27.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:27 ntpd[16164]: Soliciting pool server 51.75.67.47 2026-03-09T17:18:27.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:27 ntpd[16164]: Soliciting pool server 94.130.35.4 2026-03-09T17:18:27.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:27 ntpd[16088]: Soliciting pool server 158.180.28.150 2026-03-09T17:18:27.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:27 ntpd[16088]: Soliciting pool server 78.46.87.46 2026-03-09T17:18:27.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:27 ntpd[16088]: Soliciting pool server 144.76.167.162 2026-03-09T17:18:27.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:27 ntpd[16088]: Soliciting pool server 62.108.36.235 2026-03-09T17:18:28.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:28 ntpd[16164]: Soliciting pool server 185.168.228.59 2026-03-09T17:18:28.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:28 ntpd[16164]: Soliciting pool server 116.203.151.74 2026-03-09T17:18:28.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:28 ntpd[16164]: Soliciting pool server 185.125.190.57 2026-03-09T17:18:28.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:28 ntpd[16088]: Soliciting pool server 212.132.97.26 2026-03-09T17:18:28.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:28 ntpd[16088]: Soliciting pool server 78.46.204.247 2026-03-09T17:18:28.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:28 ntpd[16088]: Soliciting pool server 91.189.91.157 2026-03-09T17:18:29.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:29 ntpd[16164]: Soliciting pool server 185.125.190.56 2026-03-09T17:18:29.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:29 ntpd[16164]: Soliciting pool server 185.248.188.98 2026-03-09T17:18:29.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:29 ntpd[16164]: Soliciting pool server 77.42.16.222 2026-03-09T17:18:29.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:29 ntpd[16088]: Soliciting pool server 185.125.190.57 2026-03-09T17:18:29.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:29 ntpd[16088]: Soliciting pool server 116.203.151.74 2026-03-09T17:18:30.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:30 ntpd[16164]: Soliciting pool server 185.125.190.58 2026-03-09T17:18:30.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:30 ntpd[16164]: Soliciting pool server 62.108.36.235 2026-03-09T17:18:30.065 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:30 ntpd[16164]: Soliciting pool server 2a01:4f8:140:1321::2 2026-03-09T17:18:30.096 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:30 ntpd[16088]: Soliciting pool server 185.125.190.56 2026-03-09T17:18:30.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:30 ntpd[16088]: Soliciting pool server 240b:4005:12b:fb00:d11d:fbb7:f895:76be 2026-03-09T17:18:31.096 
INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:31 ntpd[16088]: Soliciting pool server 185.125.190.58 2026-03-09T17:18:32.093 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 17:18:32 ntpd[16164]: ntpd: time slew +0.000975 s 2026-03-09T17:18:32.093 INFO:teuthology.orchestra.run.vm00.stdout:ntpd: time slew +0.000975s 2026-03-09T17:18:32.097 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:32 ntpd[16088]: Soliciting pool server 2620:2d:4000:1::40 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.114 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.125 INFO:teuthology.orchestra.run.vm02.stdout: 9 Mar 17:18:32 ntpd[16088]: ntpd: time slew +0.004193 s 2026-03-09T17:18:32.125 INFO:teuthology.orchestra.run.vm02.stdout:ntpd: time slew +0.004193s 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout:============================================================================== 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.143 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T17:18:32.143 INFO:teuthology.run_tasks:Running task install... 
2026-03-09T17:18:32.145 DEBUG:teuthology.task.install:project ceph 2026-03-09T17:18:32.145 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T17:18:32.145 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T17:18:32.145 INFO:teuthology.task.install:Using flavor: default 2026-03-09T17:18:32.147 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-09T17:18:32.147 INFO:teuthology.task.install:extra packages: [] 2026-03-09T17:18:32.148 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-key list | grep Ceph 2026-03-09T17:18:32.148 DEBUG:teuthology.orchestra.run.vm02:> sudo apt-key list | grep Ceph 2026-03-09T17:18:32.194 INFO:teuthology.orchestra.run.vm00.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 2026-03-09T17:18:32.213 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T17:18:32.213 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T17:18:32.213 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T17:18:32.213 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T17:18:32.213 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:18:32.223 INFO:teuthology.orchestra.run.vm02.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 
2026-03-09T17:18:32.242 INFO:teuthology.orchestra.run.vm02.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T17:18:32.242 INFO:teuthology.orchestra.run.vm02.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T17:18:32.242 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T17:18:32.242 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T17:18:32.242 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:18:32.818 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T17:18:32.818 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:18:32.874 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T17:18:32.874 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:18:33.347 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:18:33.347 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T17:18:33.355 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update 2026-03-09T17:18:33.398 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:18:33.398 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T17:18:33.405 DEBUG:teuthology.orchestra.run.vm02:> sudo apt-get update 2026-03-09T17:18:33.648 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T17:18:33.680 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T17:18:33.708 INFO:teuthology.orchestra.run.vm02.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T17:18:33.708 INFO:teuthology.orchestra.run.vm02.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T17:18:33.716 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T17:18:33.888 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T17:18:33.903 INFO:teuthology.orchestra.run.vm00.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T17:18:33.979 INFO:teuthology.orchestra.run.vm02.stdout:Ign:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T17:18:34.016 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T17:18:34.096 INFO:teuthology.orchestra.run.vm02.stdout:Get:4 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default 
jammy Release [7662 B] 2026-03-09T17:18:34.133 INFO:teuthology.orchestra.run.vm00.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T17:18:34.214 INFO:teuthology.orchestra.run.vm02.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T17:18:34.247 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T17:18:34.313 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 25.8 kB in 1s (31.5 kB/s) 2026-03-09T17:18:34.332 INFO:teuthology.orchestra.run.vm02.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T17:18:34.884 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T17:18:34.896 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:18:34.926 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T17:18:35.076 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T17:18:35.077 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T17:18:35.160 INFO:teuthology.orchestra.run.vm02.stdout:Hit:7 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T17:18:35.197 INFO:teuthology.orchestra.run.vm02.stdout:Hit:8 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout:The following additional packages will be installed: 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:18:35.199 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout:Suggested packages: 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc 
python3-dap 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: smart-notifier mailx | mailutils 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout:Recommended packages: 2026-03-09T17:18:35.200 INFO:teuthology.orchestra.run.vm00.stdout: btrfs-tools 2026-03-09T17:18:35.242 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed: 2026-03-09T17:18:35.242 INFO:teuthology.orchestra.run.vm00.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T17:18:35.242 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T17:18:35.243 
INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout: socat unzip xmlstarlet zip 2026-03-09T17:18:35.243 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be upgraded: 2026-03-09T17:18:35.244 INFO:teuthology.orchestra.run.vm00.stdout: librados2 librbd1 2026-03-09T17:18:35.308 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 25.8 kB in 2s (14.7 kB/s) 2026-03-09T17:18:35.335 INFO:teuthology.orchestra.run.vm00.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:18:35.336 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 178 MB of archives. 2026-03-09T17:18:35.336 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T17:18:35.336 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T17:18:35.371 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T17:18:35.372 INFO:teuthology.orchestra.run.vm00.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T17:18:35.382 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T17:18:35.411 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T17:18:35.427 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T17:18:35.441 INFO:teuthology.orchestra.run.vm00.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T17:18:35.442 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T17:18:35.443 INFO:teuthology.orchestra.run.vm00.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T17:18:35.443 INFO:teuthology.orchestra.run.vm00.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T17:18:35.444 INFO:teuthology.orchestra.run.vm00.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T17:18:35.446 INFO:teuthology.orchestra.run.vm00.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T17:18:35.447 INFO:teuthology.orchestra.run.vm00.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T17:18:35.448 INFO:teuthology.orchestra.run.vm00.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T17:18:35.448 INFO:teuthology.orchestra.run.vm00.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 
[5034 B] 2026-03-09T17:18:35.449 INFO:teuthology.orchestra.run.vm00.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T17:18:35.457 INFO:teuthology.orchestra.run.vm00.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T17:18:35.459 INFO:teuthology.orchestra.run.vm00.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T17:18:35.461 INFO:teuthology.orchestra.run.vm00.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T17:18:35.461 INFO:teuthology.orchestra.run.vm00.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T17:18:35.462 INFO:teuthology.orchestra.run.vm00.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T17:18:35.462 INFO:teuthology.orchestra.run.vm00.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T17:18:35.462 INFO:teuthology.orchestra.run.vm00.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T17:18:35.463 INFO:teuthology.orchestra.run.vm00.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T17:18:35.463 INFO:teuthology.orchestra.run.vm00.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T17:18:35.464 INFO:teuthology.orchestra.run.vm00.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T17:18:35.471 INFO:teuthology.orchestra.run.vm00.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T17:18:35.471 INFO:teuthology.orchestra.run.vm00.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T17:18:35.473 INFO:teuthology.orchestra.run.vm00.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T17:18:35.474 INFO:teuthology.orchestra.run.vm00.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T17:18:35.474 INFO:teuthology.orchestra.run.vm00.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T17:18:35.474 INFO:teuthology.orchestra.run.vm00.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T17:18:35.475 INFO:teuthology.orchestra.run.vm00.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T17:18:35.475 INFO:teuthology.orchestra.run.vm00.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T17:18:35.476 INFO:teuthology.orchestra.run.vm00.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T17:18:35.480 INFO:teuthology.orchestra.run.vm00.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 
[15.1 kB] 2026-03-09T17:18:35.481 INFO:teuthology.orchestra.run.vm00.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T17:18:35.485 INFO:teuthology.orchestra.run.vm00.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T17:18:35.486 INFO:teuthology.orchestra.run.vm00.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T17:18:35.486 INFO:teuthology.orchestra.run.vm00.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T17:18:35.486 INFO:teuthology.orchestra.run.vm00.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T17:18:35.487 INFO:teuthology.orchestra.run.vm00.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T17:18:35.489 INFO:teuthology.orchestra.run.vm00.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T17:18:35.490 INFO:teuthology.orchestra.run.vm00.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T17:18:35.492 INFO:teuthology.orchestra.run.vm00.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T17:18:35.497 INFO:teuthology.orchestra.run.vm00.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T17:18:35.498 INFO:teuthology.orchestra.run.vm00.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T17:18:35.522 INFO:teuthology.orchestra.run.vm00.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T17:18:35.525 INFO:teuthology.orchestra.run.vm00.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T17:18:35.525 INFO:teuthology.orchestra.run.vm00.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T17:18:35.557 INFO:teuthology.orchestra.run.vm00.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T17:18:35.557 INFO:teuthology.orchestra.run.vm00.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T17:18:35.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T17:18:35.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T17:18:35.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T17:18:35.559 INFO:teuthology.orchestra.run.vm00.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T17:18:35.560 INFO:teuthology.orchestra.run.vm00.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 
libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T17:18:35.561 INFO:teuthology.orchestra.run.vm00.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T17:18:35.562 INFO:teuthology.orchestra.run.vm00.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T17:18:35.568 INFO:teuthology.orchestra.run.vm00.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T17:18:35.570 INFO:teuthology.orchestra.run.vm00.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T17:18:35.573 INFO:teuthology.orchestra.run.vm00.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T17:18:35.574 INFO:teuthology.orchestra.run.vm00.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T17:18:35.574 INFO:teuthology.orchestra.run.vm00.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T17:18:35.579 INFO:teuthology.orchestra.run.vm00.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T17:18:35.580 INFO:teuthology.orchestra.run.vm00.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T17:18:35.583 INFO:teuthology.orchestra.run.vm00.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T17:18:35.583 INFO:teuthology.orchestra.run.vm00.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T17:18:35.584 INFO:teuthology.orchestra.run.vm00.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T17:18:35.584 INFO:teuthology.orchestra.run.vm00.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T17:18:35.585 INFO:teuthology.orchestra.run.vm00.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T17:18:35.586 INFO:teuthology.orchestra.run.vm00.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T17:18:35.594 INFO:teuthology.orchestra.run.vm00.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T17:18:35.594 INFO:teuthology.orchestra.run.vm00.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T17:18:35.595 INFO:teuthology.orchestra.run.vm00.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T17:18:35.597 INFO:teuthology.orchestra.run.vm00.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T17:18:35.600 INFO:teuthology.orchestra.run.vm00.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T17:18:35.619 INFO:teuthology.orchestra.run.vm00.stdout:Get:78 https://archive.ubuntu.com/ubuntu 
jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T17:18:35.873 INFO:teuthology.orchestra.run.vm00.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T17:18:35.967 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T17:18:35.980 DEBUG:teuthology.orchestra.run.vm02:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:18:36.012 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T17:18:36.175 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T17:18:36.176 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T17:18:36.294 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout:The following additional packages will be installed: 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T17:18:36.302 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout:Suggested packages: 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc 
python3-dap 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: smart-notifier mailx | mailutils 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout:Recommended packages: 2026-03-09T17:18:36.303 INFO:teuthology.orchestra.run.vm02.stdout: btrfs-tools 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout:The following NEW packages will be installed: 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T17:18:36.334 INFO:teuthology.orchestra.run.vm02.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T17:18:36.335 
INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: socat unzip xmlstarlet zip 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be upgraded: 2026-03-09T17:18:36.335 INFO:teuthology.orchestra.run.vm02.stdout: librados2 librbd1 2026-03-09T17:18:36.545 INFO:teuthology.orchestra.run.vm02.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:18:36.546 INFO:teuthology.orchestra.run.vm02.stdout:Need to get 178 MB of archives. 2026-03-09T17:18:36.546 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T17:18:36.546 INFO:teuthology.orchestra.run.vm02.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T17:18:36.721 INFO:teuthology.orchestra.run.vm02.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T17:18:36.726 INFO:teuthology.orchestra.run.vm02.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T17:18:36.737 INFO:teuthology.orchestra.run.vm00.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T17:18:36.762 INFO:teuthology.orchestra.run.vm02.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T17:18:36.870 INFO:teuthology.orchestra.run.vm00.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T17:18:36.871 INFO:teuthology.orchestra.run.vm02.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T17:18:36.875 INFO:teuthology.orchestra.run.vm02.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T17:18:36.884 INFO:teuthology.orchestra.run.vm00.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T17:18:36.889 INFO:teuthology.orchestra.run.vm02.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T17:18:36.890 INFO:teuthology.orchestra.run.vm00.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T17:18:36.891 INFO:teuthology.orchestra.run.vm00.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T17:18:36.893 INFO:teuthology.orchestra.run.vm02.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 
libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T17:18:36.894 INFO:teuthology.orchestra.run.vm02.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T17:18:36.894 INFO:teuthology.orchestra.run.vm00.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T17:18:36.894 INFO:teuthology.orchestra.run.vm02.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T17:18:36.895 INFO:teuthology.orchestra.run.vm02.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T17:18:36.895 INFO:teuthology.orchestra.run.vm00.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T17:18:36.901 INFO:teuthology.orchestra.run.vm00.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T17:18:36.907 INFO:teuthology.orchestra.run.vm02.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T17:18:36.909 INFO:teuthology.orchestra.run.vm02.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T17:18:36.911 INFO:teuthology.orchestra.run.vm02.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T17:18:36.936 INFO:teuthology.orchestra.run.vm02.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T17:18:36.947 INFO:teuthology.orchestra.run.vm02.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T17:18:36.947 INFO:teuthology.orchestra.run.vm02.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T17:18:36.949 INFO:teuthology.orchestra.run.vm02.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T17:18:36.951 INFO:teuthology.orchestra.run.vm02.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T17:18:36.952 INFO:teuthology.orchestra.run.vm02.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T17:18:36.952 INFO:teuthology.orchestra.run.vm02.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T17:18:36.952 INFO:teuthology.orchestra.run.vm02.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T17:18:36.953 INFO:teuthology.orchestra.run.vm02.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T17:18:36.989 INFO:teuthology.orchestra.run.vm02.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 
B] 2026-03-09T17:18:36.990 INFO:teuthology.orchestra.run.vm02.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T17:18:36.990 INFO:teuthology.orchestra.run.vm02.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T17:18:37.028 INFO:teuthology.orchestra.run.vm02.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T17:18:37.028 INFO:teuthology.orchestra.run.vm02.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T17:18:37.028 INFO:teuthology.orchestra.run.vm02.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T17:18:37.030 INFO:teuthology.orchestra.run.vm02.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T17:18:37.034 INFO:teuthology.orchestra.run.vm02.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T17:18:37.034 INFO:teuthology.orchestra.run.vm02.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T17:18:37.035 INFO:teuthology.orchestra.run.vm02.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T17:18:37.065 INFO:teuthology.orchestra.run.vm02.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T17:18:37.065 INFO:teuthology.orchestra.run.vm02.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T17:18:37.067 INFO:teuthology.orchestra.run.vm02.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T17:18:37.102 INFO:teuthology.orchestra.run.vm02.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T17:18:37.102 INFO:teuthology.orchestra.run.vm02.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T17:18:37.106 INFO:teuthology.orchestra.run.vm02.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T17:18:37.107 INFO:teuthology.orchestra.run.vm02.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T17:18:37.107 INFO:teuthology.orchestra.run.vm02.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T17:18:37.107 INFO:teuthology.orchestra.run.vm02.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T17:18:37.108 INFO:teuthology.orchestra.run.vm02.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T17:18:37.140 INFO:teuthology.orchestra.run.vm02.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T17:18:37.141 INFO:teuthology.orchestra.run.vm02.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 
2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T17:18:37.145 INFO:teuthology.orchestra.run.vm02.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T17:18:37.179 INFO:teuthology.orchestra.run.vm02.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T17:18:37.180 INFO:teuthology.orchestra.run.vm02.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T17:18:37.205 INFO:teuthology.orchestra.run.vm02.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T17:18:37.206 INFO:teuthology.orchestra.run.vm02.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T17:18:37.208 INFO:teuthology.orchestra.run.vm02.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T17:18:37.239 INFO:teuthology.orchestra.run.vm00.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T17:18:37.239 INFO:teuthology.orchestra.run.vm00.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T17:18:37.243 INFO:teuthology.orchestra.run.vm00.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T17:18:37.875 INFO:teuthology.orchestra.run.vm02.stdout:Get:52 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T17:18:38.008 INFO:teuthology.orchestra.run.vm02.stdout:Get:53 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T17:18:38.024 INFO:teuthology.orchestra.run.vm02.stdout:Get:54 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T17:18:38.031 INFO:teuthology.orchestra.run.vm02.stdout:Get:55 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T17:18:38.032 INFO:teuthology.orchestra.run.vm02.stdout:Get:56 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T17:18:38.036 INFO:teuthology.orchestra.run.vm02.stdout:Get:57 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T17:18:38.038 INFO:teuthology.orchestra.run.vm02.stdout:Get:58 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default 
jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T17:18:38.042 INFO:teuthology.orchestra.run.vm02.stdout:Get:59 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T17:18:38.286 INFO:teuthology.orchestra.run.vm02.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T17:18:38.323 INFO:teuthology.orchestra.run.vm00.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T17:18:38.331 INFO:teuthology.orchestra.run.vm02.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T17:18:38.347 INFO:teuthology.orchestra.run.vm02.stdout:Get:62 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T17:18:38.355 INFO:teuthology.orchestra.run.vm02.stdout:Get:63 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T17:18:38.379 INFO:teuthology.orchestra.run.vm02.stdout:Get:64 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T17:18:38.397 INFO:teuthology.orchestra.run.vm02.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T17:18:38.459 INFO:teuthology.orchestra.run.vm02.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T17:18:38.497 INFO:teuthology.orchestra.run.vm00.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T17:18:38.499 INFO:teuthology.orchestra.run.vm00.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T17:18:38.502 INFO:teuthology.orchestra.run.vm00.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T17:18:38.505 INFO:teuthology.orchestra.run.vm02.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T17:18:38.564 INFO:teuthology.orchestra.run.vm02.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T17:18:38.575 INFO:teuthology.orchestra.run.vm00.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T17:18:38.716 INFO:teuthology.orchestra.run.vm02.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 
2026-03-09T17:18:38.765 INFO:teuthology.orchestra.run.vm02.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T17:18:38.811 INFO:teuthology.orchestra.run.vm02.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T17:18:38.851 INFO:teuthology.orchestra.run.vm02.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T17:18:38.912 INFO:teuthology.orchestra.run.vm02.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T17:18:38.966 INFO:teuthology.orchestra.run.vm02.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T17:18:38.968 INFO:teuthology.orchestra.run.vm00.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T17:18:39.007 INFO:teuthology.orchestra.run.vm02.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T17:18:39.516 INFO:teuthology.orchestra.run.vm02.stdout:Get:76 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T17:18:39.742 INFO:teuthology.orchestra.run.vm02.stdout:Get:77 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T17:18:39.746 INFO:teuthology.orchestra.run.vm02.stdout:Get:78 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T17:18:39.748 INFO:teuthology.orchestra.run.vm02.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T17:18:39.766 INFO:teuthology.orchestra.run.vm02.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T17:18:39.867 INFO:teuthology.orchestra.run.vm00.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T17:18:39.868 INFO:teuthology.orchestra.run.vm00.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T17:18:39.951 INFO:teuthology.orchestra.run.vm00.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T17:18:40.016 INFO:teuthology.orchestra.run.vm02.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 
2026-03-09T17:18:40.056 INFO:teuthology.orchestra.run.vm00.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T17:18:40.078 INFO:teuthology.orchestra.run.vm00.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T17:18:40.080 INFO:teuthology.orchestra.run.vm00.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T17:18:40.194 INFO:teuthology.orchestra.run.vm00.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T17:18:40.514 INFO:teuthology.orchestra.run.vm00.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T17:18:40.514 INFO:teuthology.orchestra.run.vm00.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T17:18:41.068 INFO:teuthology.orchestra.run.vm02.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T17:18:41.068 INFO:teuthology.orchestra.run.vm02.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T17:18:41.160 INFO:teuthology.orchestra.run.vm02.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T17:18:41.275 INFO:teuthology.orchestra.run.vm02.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T17:18:41.289 INFO:teuthology.orchestra.run.vm02.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T17:18:41.291 INFO:teuthology.orchestra.run.vm02.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T17:18:41.405 INFO:teuthology.orchestra.run.vm02.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T17:18:41.687 INFO:teuthology.orchestra.run.vm02.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T17:18:41.753 
INFO:teuthology.orchestra.run.vm02.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T17:18:41.775 INFO:teuthology.orchestra.run.vm02.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T17:18:41.775 INFO:teuthology.orchestra.run.vm02.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T17:18:41.792 INFO:teuthology.orchestra.run.vm02.stdout:Get:93 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T17:18:41.849 INFO:teuthology.orchestra.run.vm02.stdout:Get:94 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T17:18:41.887 INFO:teuthology.orchestra.run.vm02.stdout:Get:95 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T17:18:41.926 INFO:teuthology.orchestra.run.vm02.stdout:Get:96 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T17:18:41.969 INFO:teuthology.orchestra.run.vm02.stdout:Get:97 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T17:18:42.014 INFO:teuthology.orchestra.run.vm02.stdout:Get:98 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T17:18:42.054 INFO:teuthology.orchestra.run.vm02.stdout:Get:99 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T17:18:42.126 INFO:teuthology.orchestra.run.vm02.stdout:Get:100 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T17:18:42.164 INFO:teuthology.orchestra.run.vm02.stdout:Get:101 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T17:18:42.281 INFO:teuthology.orchestra.run.vm02.stdout:Get:102 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T17:18:42.324 INFO:teuthology.orchestra.run.vm02.stdout:Get:103 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T17:18:42.363 INFO:teuthology.orchestra.run.vm02.stdout:Get:104 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T17:18:42.465 INFO:teuthology.orchestra.run.vm02.stdout:Get:105 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T17:18:43.271 INFO:teuthology.orchestra.run.vm00.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T17:18:43.365 INFO:teuthology.orchestra.run.vm00.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T17:18:43.365 INFO:teuthology.orchestra.run.vm00.stdout:Get:108 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T17:18:44.337 INFO:teuthology.orchestra.run.vm00.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T17:18:44.644 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 178 MB in 9s (19.6 MB/s) 2026-03-09T17:18:44.948 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T17:18:44.984 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.) 2026-03-09T17:18:44.987 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T17:18:44.989 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T17:18:45.006 INFO:teuthology.orchestra.run.vm02.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T17:18:45.006 INFO:teuthology.orchestra.run.vm02.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T17:18:45.007 INFO:teuthology.orchestra.run.vm02.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T17:18:45.016 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T17:18:45.023 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T17:18:45.024 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T17:18:45.040 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T17:18:45.046 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T17:18:45.047 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T17:18:45.067 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T17:18:45.073 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T17:18:45.077 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-09T17:18:45.118 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T17:18:45.123 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T17:18:45.124 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:45.144 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T17:18:45.149 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T17:18:45.150 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:45.181 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T17:18:45.189 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T17:18:45.190 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T17:18:45.215 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.218 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T17:18:45.302 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.304 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T17:18:45.371 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libnbd0. 2026-03-09T17:18:45.376 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T17:18:45.376 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T17:18:45.390 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T17:18:45.393 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.394 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.417 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rados. 2026-03-09T17:18:45.421 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.421 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.438 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T17:18:45.441 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:45.442 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.453 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cephfs. 
2026-03-09T17:18:45.456 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.457 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.472 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T17:18:45.475 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:45.476 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.495 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T17:18:45.498 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T17:18:45.499 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T17:18:45.518 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T17:18:45.523 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T17:18:45.524 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T17:18:45.541 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T17:18:45.547 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.548 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.569 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T17:18:45.575 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T17:18:45.576 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T17:18:45.597 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T17:18:45.603 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T17:18:45.603 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T17:18:45.623 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T17:18:45.628 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T17:18:45.629 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T17:18:45.649 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua5.1. 2026-03-09T17:18:45.655 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T17:18:45.656 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T17:18:45.674 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-any. 2026-03-09T17:18:45.679 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 
2026-03-09T17:18:45.680 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T17:18:45.698 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package zip. 2026-03-09T17:18:45.703 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T17:18:45.703 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T17:18:45.721 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package unzip. 2026-03-09T17:18:45.726 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T17:18:45.727 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T17:18:45.746 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package luarocks. 2026-03-09T17:18:45.751 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T17:18:45.752 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T17:18:45.802 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librgw2. 2026-03-09T17:18:45.808 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.809 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.836 INFO:teuthology.orchestra.run.vm02.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T17:18:45.938 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T17:18:45.944 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.945 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:45.963 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T17:18:45.969 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T17:18:45.970 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T17:18:45.986 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T17:18:45.992 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:45.992 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.016 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-common. 2026-03-09T17:18:46.021 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.022 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.159 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 178 MB in 10s (18.7 MB/s) 2026-03-09T17:18:46.260 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package liblttng-ust1:amd64. 
2026-03-09T17:18:46.291 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T17:18:46.293 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T17:18:46.387 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T17:18:46.408 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T17:18:46.410 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-base. 2026-03-09T17:18:46.414 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T17:18:46.415 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T17:18:46.416 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.421 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.431 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T17:18:46.437 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T17:18:46.437 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T17:18:46.458 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T17:18:46.464 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T17:18:46.468 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:46.527 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T17:18:46.530 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T17:18:46.532 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T17:18:46.533 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:46.536 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T17:18:46.537 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T17:18:46.553 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T17:18:46.553 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cheroot. 
2026-03-09T17:18:46.560 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T17:18:46.560 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T17:18:46.561 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:46.561 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T17:18:46.583 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T17:18:46.585 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T17:18:46.588 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T17:18:46.589 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T17:18:46.591 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T17:18:46.592 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T17:18:46.605 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T17:18:46.610 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T17:18:46.611 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T17:18:46.614 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.617 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T17:18:46.626 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T17:18:46.631 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T17:18:46.632 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T17:18:46.646 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T17:18:46.651 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T17:18:46.652 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T17:18:46.683 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-portend. 2026-03-09T17:18:46.689 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T17:18:46.690 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T17:18:46.695 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.697 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T17:18:46.707 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-zc.lockfile. 
2026-03-09T17:18:46.712 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T17:18:46.713 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T17:18:46.729 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T17:18:46.736 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T17:18:46.749 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T17:18:46.768 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libnbd0. 2026-03-09T17:18:46.773 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T17:18:46.774 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T17:18:46.782 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T17:18:46.787 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T17:18:46.788 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T17:18:46.788 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T17:18:46.794 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.795 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.805 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T17:18:46.810 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T17:18:46.812 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T17:18:46.825 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rados. 2026-03-09T17:18:46.828 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-mako. 2026-03-09T17:18:46.831 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.832 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.834 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T17:18:46.835 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T17:18:46.854 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T17:18:46.854 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T17:18:46.859 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T17:18:46.859 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T17:18:46.861 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-09T17:18:46.863 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.875 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T17:18:46.877 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T17:18:46.879 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T17:18:46.880 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T17:18:46.882 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.883 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.893 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webob. 2026-03-09T17:18:46.898 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T17:18:46.899 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T17:18:46.899 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T17:18:46.904 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:46.905 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:46.920 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T17:18:46.925 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T17:18:46.925 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T17:18:46.928 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T17:18:46.930 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T17:18:46.931 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T17:18:46.947 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T17:18:46.951 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T17:18:46.952 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T17:18:46.953 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T17:18:46.956 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T17:18:46.957 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T17:18:46.970 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-paste. 2026-03-09T17:18:46.975 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 
2026-03-09T17:18:46.976 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T17:18:46.976 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T17:18:46.982 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:46.983 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.010 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T17:18:47.011 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T17:18:47.017 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T17:18:47.017 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T17:18:47.018 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T17:18:47.018 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T17:18:47.033 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T17:18:47.039 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T17:18:47.040 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T17:18:47.043 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T17:18:47.049 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T17:18:47.050 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T17:18:47.059 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T17:18:47.065 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T17:18:47.066 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T17:18:47.073 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T17:18:47.078 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T17:18:47.079 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T17:18:47.085 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T17:18:47.092 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T17:18:47.092 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T17:18:47.099 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua5.1. 2026-03-09T17:18:47.106 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T17:18:47.107 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 
2026-03-09T17:18:47.125 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T17:18:47.126 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua-any. 2026-03-09T17:18:47.130 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T17:18:47.131 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T17:18:47.131 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T17:18:47.132 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T17:18:47.143 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package zip. 2026-03-09T17:18:47.147 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T17:18:47.148 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T17:18:47.156 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T17:18:47.162 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:47.163 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.167 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package unzip. 2026-03-09T17:18:47.172 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T17:18:47.173 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T17:18:47.193 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package luarocks. 2026-03-09T17:18:47.197 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T17:18:47.198 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T17:18:47.203 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T17:18:47.209 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.210 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.228 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T17:18:47.233 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.234 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.246 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package librgw2. 2026-03-09T17:18:47.250 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.251 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.265 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mon. 
2026-03-09T17:18:47.270 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.271 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.396 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T17:18:47.397 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T17:18:47.400 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.401 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.402 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T17:18:47.403 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T17:18:47.423 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T17:18:47.424 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T17:18:47.427 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T17:18:47.428 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T17:18:47.429 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.430 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.445 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T17:18:47.450 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.451 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.475 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-common. 2026-03-09T17:18:47.481 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.482 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.726 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph. 2026-03-09T17:18:47.732 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.834 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.864 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T17:18:47.870 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.870 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-base. 2026-03-09T17:18:47.871 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:18:47.876 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.881 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.908 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T17:18:47.913 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.914 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:47.989 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package cephadm. 2026-03-09T17:18:47.994 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T17:18:47.995 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:47.996 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:48.000 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T17:18:48.001 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T17:18:48.016 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T17:18:48.020 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T17:18:48.020 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T17:18:48.021 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:48.026 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T17:18:48.027 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T17:18:48.046 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T17:18:48.048 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T17:18:48.050 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:48.050 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:48.054 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T17:18:48.055 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T17:18:48.073 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T17:18:48.074 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T17:18:48.077 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T17:18:48.077 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-repoze.lru (0.7-2) ... 
2026-03-09T17:18:48.080 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T17:18:48.081 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T17:18:48.094 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-routes. 2026-03-09T17:18:48.097 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T17:18:48.098 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T17:18:48.099 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T17:18:48.104 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T17:18:48.105 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T17:18:48.122 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T17:18:48.124 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T17:18:48.127 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T17:18:48.128 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:48.128 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T17:18:48.129 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:48.148 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-portend. 2026-03-09T17:18:48.153 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T17:18:48.154 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T17:18:48.179 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T17:18:48.185 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T17:18:48.186 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T17:18:48.204 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T17:18:48.209 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T17:18:48.210 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T17:18:48.244 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T17:18:48.250 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T17:18:48.251 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T17:18:48.275 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T17:18:48.280 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 
2026-03-09T17:18:48.281 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T17:18:48.300 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-mako. 2026-03-09T17:18:48.305 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T17:18:48.306 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T17:18:48.330 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T17:18:48.335 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T17:18:48.336 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T17:18:48.468 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T17:18:48.473 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T17:18:48.473 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T17:18:48.490 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-webob. 2026-03-09T17:18:48.495 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T17:18:48.495 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T17:18:48.496 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T17:18:48.501 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T17:18:48.502 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T17:18:48.518 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T17:18:48.523 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T17:18:48.525 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T17:18:48.558 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T17:18:48.563 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T17:18:48.563 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T17:18:48.563 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T17:18:48.568 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T17:18:48.569 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T17:18:48.581 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-paste. 2026-03-09T17:18:48.586 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T17:18:48.588 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 
2026-03-09T17:18:48.609 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T17:18:48.616 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T17:18:48.616 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T17:18:48.622 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T17:18:48.627 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T17:18:48.627 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T17:18:48.635 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T17:18:48.642 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T17:18:48.643 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T17:18:48.644 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T17:18:48.650 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T17:18:48.651 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T17:18:48.671 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T17:18:48.677 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T17:18:48.678 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T17:18:48.698 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T17:18:48.704 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T17:18:48.705 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T17:18:48.752 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T17:18:48.758 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T17:18:48.759 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T17:18:48.774 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T17:18:48.779 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:48.780 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:48.784 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T17:18:48.790 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:48.791 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:18:48.836 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T17:18:48.842 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:48.844 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:48.862 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T17:18:48.869 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:48.870 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:48.902 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T17:18:48.907 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:48.908 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.075 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T17:18:49.078 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T17:18:49.080 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T17:18:49.080 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T17:18:49.084 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T17:18:49.085 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T17:18:49.099 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T17:18:49.100 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T17:18:49.105 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:49.105 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.106 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T17:18:49.107 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T17:18:49.126 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T17:18:49.131 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T17:18:49.132 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T17:18:49.151 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T17:18:49.156 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T17:18:49.158 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T17:18:49.177 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-websocket. 
2026-03-09T17:18:49.183 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T17:18:49.184 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T17:18:49.204 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T17:18:49.210 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T17:18:49.223 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T17:18:49.419 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph. 2026-03-09T17:18:49.424 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:49.425 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.440 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T17:18:49.445 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:49.446 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.448 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T17:18:49.454 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:49.455 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.476 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T17:18:49.477 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T17:18:49.483 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:49.483 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T17:18:49.484 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.484 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T17:18:49.503 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T17:18:49.509 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T17:18:49.518 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:49.531 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package cephadm. 2026-03-09T17:18:49.535 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:49.536 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.536 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package jq. 2026-03-09T17:18:49.543 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 
2026-03-09T17:18:49.544 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:49.555 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T17:18:49.559 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T17:18:49.559 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:49.560 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package socat. 2026-03-09T17:18:49.566 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T17:18:49.567 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T17:18:49.590 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T17:18:49.592 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T17:18:49.595 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:49.596 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.598 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T17:18:49.599 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T17:18:49.712 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T17:18:49.718 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T17:18:49.719 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T17:18:49.723 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-test. 2026-03-09T17:18:49.729 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:49.730 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:49.736 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-routes. 2026-03-09T17:18:49.742 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T17:18:49.743 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T17:18:49.768 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T17:18:49.774 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:49.774 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:50.150 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T17:18:50.157 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T17:18:50.157 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 
2026-03-09T17:18:50.219 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T17:18:50.225 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T17:18:50.226 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T17:18:50.442 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T17:18:50.447 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T17:18:50.515 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T17:18:50.533 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T17:18:50.534 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T17:18:50.538 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:50.539 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:50.540 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T17:18:50.540 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T17:18:50.568 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T17:18:50.573 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:50.574 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:50.590 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T17:18:50.595 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T17:18:50.596 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T17:18:50.619 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T17:18:50.624 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T17:18:50.625 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T17:18:50.657 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T17:18:50.664 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T17:18:50.665 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T17:18:50.680 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T17:18:50.686 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:50.686 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:18:50.706 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package pkg-config. 2026-03-09T17:18:50.713 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T17:18:50.713 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T17:18:50.730 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T17:18:50.736 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T17:18:50.736 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:50.780 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T17:18:50.787 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T17:18:50.787 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T17:18:50.805 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T17:18:50.812 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T17:18:50.812 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T17:18:50.835 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T17:18:50.841 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T17:18:50.842 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T17:18:50.933 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T17:18:50.939 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T17:18:50.943 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T17:18:50.963 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T17:18:50.966 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-py. 2026-03-09T17:18:50.967 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T17:18:50.968 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T17:18:50.972 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T17:18:50.973 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T17:18:50.982 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T17:18:50.988 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T17:18:50.989 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T17:18:50.997 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T17:18:51.003 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 
2026-03-09T17:18:51.004 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T17:18:51.006 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T17:18:51.013 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T17:18:51.014 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T17:18:51.033 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T17:18:51.038 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T17:18:51.039 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T17:18:51.067 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T17:18:51.073 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T17:18:51.073 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T17:18:51.084 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T17:18:51.091 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T17:18:51.092 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T17:18:51.096 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T17:18:51.101 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T17:18:51.110 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-toml. 2026-03-09T17:18:51.114 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T17:18:51.115 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T17:18:51.116 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T17:18:51.132 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T17:18:51.137 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T17:18:51.138 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T17:18:51.165 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T17:18:51.170 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T17:18:51.171 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T17:18:51.190 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T17:18:51.196 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T17:18:51.196 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 
2026-03-09T17:18:51.295 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T17:18:51.296 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:51.296 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:51.306 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package radosgw. 2026-03-09T17:18:51.311 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T17:18:51.312 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:51.313 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:51.315 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T17:18:51.316 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T17:18:51.334 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T17:18:51.340 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T17:18:51.340 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:51.358 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package jq. 2026-03-09T17:18:51.364 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T17:18:51.365 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:51.381 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package socat. 2026-03-09T17:18:51.389 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T17:18:51.389 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T17:18:51.415 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T17:18:51.421 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T17:18:51.421 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T17:18:51.519 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T17:18:51.524 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:51.524 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-test. 2026-03-09T17:18:51.525 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:51.529 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:51.530 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:51.542 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package smartmontools. 
2026-03-09T17:18:51.547 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T17:18:51.555 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T17:18:51.605 INFO:teuthology.orchestra.run.vm00.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T17:18:51.849 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T17:18:51.849 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T17:18:52.297 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T17:18:52.319 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T17:18:52.324 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:18:52.325 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:52.353 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T17:18:52.359 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:52.360 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:52.367 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T17:18:52.369 INFO:teuthology.orchestra.run.vm00.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T17:18:52.377 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T17:18:52.384 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T17:18:52.384 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T17:18:52.410 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T17:18:52.416 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T17:18:52.417 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T17:18:52.434 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T17:18:52.436 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T17:18:52.441 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T17:18:52.442 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T17:18:52.482 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package pkg-config. 2026-03-09T17:18:52.487 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T17:18:52.488 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 
2026-03-09T17:18:52.505 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T17:18:52.512 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T17:18:52.513 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:52.555 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T17:18:52.561 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T17:18:52.561 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T17:18:52.576 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T17:18:52.582 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T17:18:52.582 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T17:18:52.604 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T17:18:52.610 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T17:18:52.611 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T17:18:52.627 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T17:18:52.632 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T17:18:52.633 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T17:18:52.650 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T17:18:52.653 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-py. 2026-03-09T17:18:52.659 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T17:18:52.660 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T17:18:52.681 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T17:18:52.687 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T17:18:52.687 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T17:18:52.749 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T17:18:52.754 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T17:18:52.755 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T17:18:52.769 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-toml. 2026-03-09T17:18:52.774 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T17:18:52.775 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-toml (0.10.2-1) ... 
2026-03-09T17:18:52.792 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T17:18:52.798 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T17:18:52.799 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T17:18:52.826 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T17:18:52.831 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T17:18:52.832 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T17:18:52.853 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T17:18:52.860 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T17:18:52.860 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T17:18:52.970 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package radosgw. 2026-03-09T17:18:52.976 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:52.977 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:53.019 INFO:teuthology.orchestra.run.vm00.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T17:18:53.025 INFO:teuthology.orchestra.run.vm00.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T17:18:53.027 INFO:teuthology.orchestra.run.vm00.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:53.073 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user cephadm....done 2026-03-09T17:18:53.082 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T17:18:53.164 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T17:18:53.176 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T17:18:53.182 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:18:53.183 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:53.198 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package smartmontools. 2026-03-09T17:18:53.203 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T17:18:53.210 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T17:18:53.228 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:53.231 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T17:18:53.257 INFO:teuthology.orchestra.run.vm02.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T17:18:53.294 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T17:18:53.362 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 
2026-03-09T17:18:53.364 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T17:18:53.462 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T17:18:53.523 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T17:18:53.523 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T17:18:53.583 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T17:18:53.650 INFO:teuthology.orchestra.run.vm00.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T17:18:53.658 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T17:18:53.730 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T17:18:53.801 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:53.871 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T17:18:53.874 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T17:18:53.877 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T17:18:53.879 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T17:18:53.882 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T17:18:53.884 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T17:18:53.889 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T17:18:53.891 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T17:18:53.893 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T17:18:53.900 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T17:18:53.901 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T17:18:53.967 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T17:18:53.969 INFO:teuthology.orchestra.run.vm02.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T17:18:54.023 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T17:18:54.032 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T17:18:54.093 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T17:18:54.169 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T17:18:54.252 INFO:teuthology.orchestra.run.vm00.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T17:18:54.255 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 
2026-03-09T17:18:54.278 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T17:18:54.542 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T17:18:54.610 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T17:18:54.612 INFO:teuthology.orchestra.run.vm00.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-09T17:18:54.614 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T17:18:54.624 INFO:teuthology.orchestra.run.vm02.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T17:18:54.630 INFO:teuthology.orchestra.run.vm02.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T17:18:54.632 INFO:teuthology.orchestra.run.vm02.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:54.673 INFO:teuthology.orchestra.run.vm02.stdout:Adding system user cephadm....done 2026-03-09T17:18:54.682 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T17:18:54.703 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:54.808 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T17:18:54.841 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T17:18:54.873 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:54.876 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T17:18:54.941 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T17:18:54.969 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T17:18:55.090 INFO:teuthology.orchestra.run.vm02.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T17:18:55.092 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T17:18:55.093 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T17:18:55.187 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T17:18:55.205 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T17:18:55.276 INFO:teuthology.orchestra.run.vm00.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T17:18:55.279 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:55.312 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T17:18:55.371 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T17:18:55.382 INFO:teuthology.orchestra.run.vm02.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T17:18:55.390 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T17:18:55.458 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 
2026-03-09T17:18:55.528 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:55.600 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T17:18:55.602 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T17:18:55.605 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T17:18:55.607 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T17:18:55.609 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T17:18:55.611 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T17:18:55.615 INFO:teuthology.orchestra.run.vm02.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T17:18:55.617 INFO:teuthology.orchestra.run.vm02.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T17:18:55.619 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T17:18:55.621 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T17:18:55.749 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T17:18:55.824 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T17:18:55.897 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T17:18:55.958 INFO:teuthology.orchestra.run.vm00.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T17:18:55.978 INFO:teuthology.orchestra.run.vm02.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T17:18:55.979 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T17:18:55.980 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:55.985 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T17:18:56.054 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T17:18:56.056 INFO:teuthology.orchestra.run.vm00.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T17:18:56.058 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T17:18:56.126 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T17:18:56.189 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:56.191 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T17:18:56.258 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T17:18:56.265 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T17:18:56.328 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T17:18:56.336 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T17:18:56.339 INFO:teuthology.orchestra.run.vm02.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 
2026-03-09T17:18:56.341 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T17:18:56.407 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T17:18:56.440 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T17:18:56.472 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T17:18:56.535 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T17:18:56.581 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T17:18:56.611 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T17:18:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T17:18:56.693 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T17:18:56.695 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T17:18:56.716 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T17:18:56.766 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T17:18:56.819 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T17:18:56.853 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T17:18:56.939 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T17:18:56.944 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T17:18:57.010 INFO:teuthology.orchestra.run.vm02.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T17:18:57.010 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T17:18:57.012 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:57.012 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T17:18:57.015 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:57.017 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T17:18:57.102 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T17:18:57.155 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T17:18:57.224 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T17:18:57.226 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T17:18:57.289 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:57.291 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T17:18:57.364 INFO:teuthology.orchestra.run.vm00.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:57.366 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T17:18:57.439 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 
2026-03-09T17:18:57.578 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T17:18:57.662 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T17:18:57.669 INFO:teuthology.orchestra.run.vm02.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T17:18:57.692 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:57.697 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T17:18:57.956 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T17:18:57.956 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T17:18:58.008 INFO:teuthology.orchestra.run.vm02.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T17:18:58.008 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.010 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T17:18:58.010 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.013 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T17:18:58.079 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T17:18:58.149 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:58.151 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T17:18:58.227 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T17:18:58.296 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T17:18:58.371 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T17:18:58.444 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T17:18:58.514 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T17:18:58.589 INFO:teuthology.orchestra.run.vm02.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T17:18:58.591 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T17:18:58.606 INFO:teuthology.orchestra.run.vm00.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T17:18:58.614 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.616 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.618 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.621 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.624 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:58.670 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T17:18:58.672 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 
2026-03-09T17:18:58.687 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T17:18:58.687 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T17:18:58.793 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T17:18:58.904 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T17:18:58.998 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T17:18:59.037 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.040 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.042 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.045 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.047 INFO:teuthology.orchestra.run.vm00.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.050 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.053 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.055 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.083 INFO:teuthology.orchestra.run.vm02.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T17:18:59.086 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T17:18:59.088 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:59.091 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T17:18:59.117 INFO:teuthology.orchestra.run.vm00.stdout:Adding group ceph....done 2026-03-09T17:18:59.153 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user ceph....done 2026-03-09T17:18:59.160 INFO:teuthology.orchestra.run.vm00.stdout:Setting system user ceph properties....done 2026-03-09T17:18:59.164 INFO:teuthology.orchestra.run.vm00.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T17:18:59.227 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T17:18:59.227 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T17:18:59.296 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T17:18:59.298 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T17:18:59.391 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T17:18:59.393 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T17:18:59.472 INFO:teuthology.orchestra.run.vm02.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T17:18:59.474 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-webtest (2.0.35-1) ... 
2026-03-09T17:18:59.484 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T17:18:59.549 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T17:18:59.684 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T17:18:59.766 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T17:18:59.838 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.841 INFO:teuthology.orchestra.run.vm00.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.878 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T17:18:59.881 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.883 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:18:59.886 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T17:19:00.102 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T17:19:00.102 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T17:19:00.538 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:00.559 INFO:teuthology.orchestra.run.vm02.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T17:19:00.566 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:00.568 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:00.570 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:00.572 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:00.574 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:00.639 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T17:19:00.639 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T17:19:00.640 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T17:19:00.997 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.040 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.042 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.044 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:19:01.047 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.049 INFO:teuthology.orchestra.run.vm02.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.051 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.054 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.056 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.059 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T17:19:01.059 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T17:19:01.088 INFO:teuthology.orchestra.run.vm02.stdout:Adding group ceph....done 2026-03-09T17:19:01.120 INFO:teuthology.orchestra.run.vm02.stdout:Adding system user ceph....done 2026-03-09T17:19:01.128 INFO:teuthology.orchestra.run.vm02.stdout:Setting system user ceph properties....done 2026-03-09T17:19:01.132 INFO:teuthology.orchestra.run.vm02.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T17:19:01.195 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T17:19:01.430 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T17:19:01.456 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.522 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T17:19:01.522 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T17:19:01.824 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.827 INFO:teuthology.orchestra.run.vm02.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.910 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:01.988 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T17:19:01.989 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T17:19:02.058 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T17:19:02.059 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T17:19:02.408 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:19:02.410 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.425 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.434 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.485 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T17:19:02.485 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T17:19:02.515 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T17:19:02.862 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.877 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.878 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.880 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.901 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:02.941 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T17:19:02.941 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T17:19:03.024 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T17:19:03.033 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T17:19:03.048 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T17:19:03.132 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T17:19:03.330 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:03.394 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T17:19:03.394 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T17:19:03.471 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:03.471 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 
2026-03-09T17:19:03.471 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:03.471 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-09T17:19:03.476 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:03.479 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T17:19:03.744 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:03.820 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T17:19:03.820 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T17:19:04.151 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.154 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.166 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.224 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T17:19:04.224 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T17:19:04.394 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:19:04.402 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T17:19:04.471 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T17:19:04.600 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.613 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.615 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.628 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:19:04.655 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 
2026-03-09T17:19:04.656 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T17:19:04.744 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T17:19:04.752 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T17:19:04.768 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T17:19:04.768 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:19:04.769 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T17:19:04.770 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T17:19:04.770 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:19:04.787 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed: 2026-03-09T17:19:04.787 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath python3-xmltodict 2026-03-09T17:19:04.850 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T17:19:04.996 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:19:04.996 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 34.3 kB of archives. 2026-03-09T17:19:04.996 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-09T17:19:04.996 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T17:19:05.076 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T17:19:05.174 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:05.174 INFO:teuthology.orchestra.run.vm02.stdout:Running kernel seems to be up-to-date. 2026-03-09T17:19:05.174 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:05.174 INFO:teuthology.orchestra.run.vm02.stdout:Services to be restarted: 2026-03-09T17:19:05.180 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart packagekit.service 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout:Service restarts being deferred: 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart unattended-upgrades.service 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout:No containers need to be restarted. 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout:No user sessions are running outdated binaries. 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:05.183 INFO:teuthology.orchestra.run.vm02.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 
2026-03-09T17:19:05.271 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 34.3 kB in 0s (119 kB/s) 2026-03-09T17:19:05.287 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T17:19:05.318 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-09T17:19:05.320 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T17:19:05.322 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-09T17:19:05.339 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-09T17:19:05.345 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-09T17:19:05.346 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T17:19:05.374 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T17:19:05.439 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-09T17:19:05.763 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:05.763 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 2026-03-09T17:19:05.763 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:05.763 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-09T17:19:05.768 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-09T17:19:05.771 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:19:05.772 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T17:19:06.160 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T17:19:06.163 DEBUG:teuthology.orchestra.run.vm02:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T17:19:06.237 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T17:19:06.439 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T17:19:06.440 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T17:19:06.598 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:19:06.598 INFO:teuthology.orchestra.run.vm02.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T17:19:06.598 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T17:19:06.598 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:19:06.612 INFO:teuthology.orchestra.run.vm02.stdout:The following NEW packages will be installed: 2026-03-09T17:19:06.612 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath python3-xmltodict 2026-03-09T17:19:06.740 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:19:06.743 DEBUG:teuthology.parallel:result is None 2026-03-09T17:19:06.811 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:19:06.812 INFO:teuthology.orchestra.run.vm02.stdout:Need to get 34.3 kB of archives. 2026-03-09T17:19:06.812 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-09T17:19:06.812 INFO:teuthology.orchestra.run.vm02.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T17:19:06.889 INFO:teuthology.orchestra.run.vm02.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T17:19:07.072 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 34.3 kB in 0s (124 kB/s) 2026-03-09T17:19:07.171 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T17:19:07.193 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-09T17:19:07.196 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T17:19:07.197 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-09T17:19:07.211 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-09T17:19:07.216 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 
2026-03-09T17:19:07.216 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T17:19:07.243 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T17:19:07.305 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-09T17:19:07.636 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:07.636 INFO:teuthology.orchestra.run.vm02.stdout:Running kernel seems to be up-to-date. 2026-03-09T17:19:07.636 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:07.636 INFO:teuthology.orchestra.run.vm02.stdout:Services to be restarted: 2026-03-09T17:19:07.642 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart packagekit.service 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout:Service restarts being deferred: 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart unattended-upgrades.service 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout:No containers need to be restarted. 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout:No user sessions are running outdated binaries. 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:19:07.644 INFO:teuthology.orchestra.run.vm02.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T17:19:08.552 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:19:08.556 DEBUG:teuthology.parallel:result is None 2026-03-09T17:19:08.556 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:19:09.168 DEBUG:teuthology.orchestra.run.vm00:> dpkg-query -W -f '${Version}' ceph 2026-03-09T17:19:09.177 INFO:teuthology.orchestra.run.vm00.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:19:09.177 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:19:09.177 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-09T17:19:09.178 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:19:09.827 DEBUG:teuthology.orchestra.run.vm02:> dpkg-query -W -f '${Version}' ceph 2026-03-09T17:19:09.836 INFO:teuthology.orchestra.run.vm02.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:19:09.836 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T17:19:09.836 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-09T17:19:09.837 INFO:teuthology.task.install.util:Shipping valgrind.supp... 
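Once the packages are in place, the install task asks shaman for the build matching the job's sha1 and compares it with the version dpkg reports on the node, as in the two dpkg-query calls above. A rough standalone sketch, assuming the requests library and the response shape visible later in this log (the builder_project result):

```python
import subprocess
import requests

SHAMAN = "https://shaman.ceph.com/api/search"

def expected_version(sha1, distro="ubuntu/22.04/x86_64", flavor="default"):
    # Same query the log shows: ready builds of this exact sha1.
    params = {"status": "ready", "project": "ceph", "flavor": flavor,
              "distros": distro, "sha1": sha1}
    build = requests.get(SHAMAN, params=params, timeout=30).json()[0]
    return build["extra"]["package_manager_version"]  # e.g. 19.2.3-678-ge911bdeb-1jammy

def installed_version(package="ceph"):
    return subprocess.check_output(
        ["dpkg-query", "-W", "-f", "${Version}", package], text=True)

sha1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"
assert installed_version() == expected_version(sha1), "wrong ceph version installed"
```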
2026-03-09T17:19:09.837 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:19:09.837 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T17:19:09.845 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:19:09.846 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T17:19:09.887 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-09T17:19:09.887 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:19:09.887 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T17:19:09.894 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T17:19:09.942 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:19:09.942 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T17:19:09.950 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T17:19:09.998 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-09T17:19:09.998 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:19:09.998 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T17:19:10.006 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T17:19:10.054 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:19:10.054 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T17:19:10.062 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T17:19:10.110 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-09T17:19:10.110 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:19:10.110 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T17:19:10.117 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T17:19:10.165 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:19:10.165 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T17:19:10.173 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T17:19:10.222 INFO:teuthology.run_tasks:Running task cephadm... 
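The helper scripts above are shipped by piping the file contents into `sudo dd of=<path>` on the remote and then marking them read/execute. A minimal sketch of that pattern over plain ssh (hypothetical hostname; assumes key-based ssh and passwordless sudo, rather than teuthology's own remote runner):

```python
import subprocess

def ship_file(host, local_path, remote_path, executable=False):
    # Copy a local file to a root-owned remote path by piping it into 'sudo dd'.
    with open(local_path, "rb") as f:
        subprocess.run(["ssh", host, f"sudo dd of={remote_path}"],
                       stdin=f, check=True)
    if executable:
        subprocess.run(["ssh", host, f"sudo chmod a=rx -- {remote_path}"],
                       check=True)

# hypothetical local copy of the script; mirrors the daemon-helper shipment above
ship_file("vm00.local", "./daemon-helper", "/usr/bin/daemon-helper", executable=True)
```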
2026-03-09T17:19:10.268 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'client': {'debug ms': 1}, 'global': {'mon election default strategy': 3, 'ms bind msgr1': False, 'ms bind msgr2': True, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon warn on pool no app': False}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd class default list': '*', 'osd class load list': '*', 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'reached quota', 'but it is still running', 'overall HEALTH_', '\\(POOL_FULL\\)', '\\(SMALLER_PGP_NUM\\)', '\\(CACHE_POOL_NO_HIT_SET\\)', '\\(CACHE_POOL_NEAR_FULL\\)', '\\(POOL_APP_NOT_ENABLED\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'root'} 2026-03-09T17:19:10.268 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:19:10.268 INFO:tasks.cephadm:Cluster fsid is 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:19:10.268 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-09T17:19:10.269 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.100', 'mon.c': '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]', 'mon.b': '192.168.123.102'} 2026-03-09T17:19:10.269 INFO:tasks.cephadm:First mon is mon.a on vm00 2026-03-09T17:19:10.269 INFO:tasks.cephadm:First mgr is y 2026-03-09T17:19:10.269 INFO:tasks.cephadm:Normalizing hostnames... 2026-03-09T17:19:10.269 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s) 2026-03-09T17:19:10.277 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s) 2026-03-09T17:19:10.285 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-09T17:19:10.285 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:19:10.892 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-09T17:19:11.517 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:19:11.518 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-09T17:19:11.518 INFO:tasks.cephadm:Downloading cephadm from url: 
https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-09T17:19:11.518 DEBUG:teuthology.orchestra.run.vm00:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T17:19:12.812 INFO:teuthology.orchestra.run.vm00.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 17:19 /home/ubuntu/cephtest/cephadm 2026-03-09T17:19:12.812 DEBUG:teuthology.orchestra.run.vm02:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T17:19:14.216 INFO:teuthology.orchestra.run.vm02.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 17:19 /home/ubuntu/cephtest/cephadm 2026-03-09T17:19:14.216 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T17:19:14.220 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T17:19:14.228 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-09T17:19:14.228 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T17:19:14.263 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T17:19:14.353 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T17:19:14.356 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
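The compiled cephadm binary is fetched from the chacra URL discovered above and only made executable after a basic size check (non-empty and larger than 1000 bytes), exactly as in the curl/test/chmod sequence just logged. A sketch with requests, writing to a local path instead of /home/ubuntu/cephtest:

```python
import os
import stat
import requests

url = ("https://1.chacra.ceph.com/binaries/ceph/squid/"
       "e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/"
       "flavors/default/cephadm")
dest = "./cephadm"  # the job uses /home/ubuntu/cephtest/cephadm

resp = requests.get(url, timeout=60)
resp.raise_for_status()
with open(dest, "wb") as f:
    f.write(resp.content)

# Same sanity check as the log: refuse a tiny/empty download, then chmod +x.
if os.path.getsize(dest) <= 1000:
    raise RuntimeError("downloaded cephadm looks truncated")
os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```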
2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [ 2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout: ] 2026-03-09T17:20:08.619 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T17:20:15.703 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-09T17:20:15.704 INFO:teuthology.orchestra.run.vm02.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T17:20:15.704 INFO:teuthology.orchestra.run.vm02.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T17:20:15.704 INFO:teuthology.orchestra.run.vm02.stdout: "repo_digests": [ 2026-03-09T17:20:15.704 INFO:teuthology.orchestra.run.vm02.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T17:20:15.704 INFO:teuthology.orchestra.run.vm02.stdout: ] 2026-03-09T17:20:15.704 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-09T17:20:15.717 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph 2026-03-09T17:20:15.725 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph 2026-03-09T17:20:15.732 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph 2026-03-09T17:20:15.774 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph 2026-03-09T17:20:15.782 INFO:tasks.cephadm:Writing seed config... 
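`cephadm ... pull` prints a small JSON document on stdout with the ceph version, image id and repo digests, as captured above for both hosts; a sketch that reruns the pull and parses that output (field names taken from the log, path and image as used by this job):

```python
import json
import subprocess

out = subprocess.check_output(
    ["sudo", "/home/ubuntu/cephtest/cephadm",
     "--image", "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
     "pull"], text=True)

info = json.loads(out)
print(info["ceph_version"])     # "ceph version 19.2.3-678-ge911bdeb (...) squid (stable)"
print(info["image_id"])
print(info["repo_digests"][0])  # quay.ceph.io/ceph-ci/ceph@sha256:8fda260a...
```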
2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [client] debug ms = 1 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [global] mon election default strategy = 3 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [global] ms bind msgr1 = False 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [global] ms bind msgr2 = True 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [mon] mon warn on pool no app = False 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [osd] osd class default list = * 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [osd] osd class load list = * 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-09T17:20:15.783 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-09T17:20:15.783 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:20:15.783 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-09T17:20:15.818 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = 16190428-1bdc-11f1-aea4-d920f1c7e51e mon election default strategy = 3 ms bind msgr1 = False ms bind msgr2 = True ms type = async [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd 
= 20 osd class default list = * osd class load list = * osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 mon warn on pool no app = False [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true [client] debug ms = 1 2026-03-09T17:20:15.818 DEBUG:teuthology.orchestra.run.vm00:mon.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a.service 2026-03-09T17:20:15.860 DEBUG:teuthology.orchestra.run.vm00:mgr.y> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y.service 2026-03-09T17:20:15.903 INFO:tasks.cephadm:Bootstrapping... 2026-03-09T17:20:15.904 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T17:20:16.040 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-09T17:20:16.040 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '16190428-1bdc-11f1-aea4-d920f1c7e51e', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.100', '--skip-admin-label'] 2026-03-09T17:20:16.040 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T17:20:16.040 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present... 2026-03-09T17:20:16.040 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present... 2026-03-09T17:20:16.040 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place... 
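The bootstrap command is assembled from the fsid, seed config, mon/mgr ids and mon IP chosen earlier; reassembling the same argv in Python (values copied from the logged command, not a general recipe) gives:

```python
# values taken from the log above
fsid = "16190428-1bdc-11f1-aea4-d920f1c7e51e"
testdir = "/home/ubuntu/cephtest"
image = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

bootstrap_cmd = [
    "sudo", f"{testdir}/cephadm", "--image", image, "-v", "bootstrap",
    "--fsid", fsid,
    "--config", f"{testdir}/seed.ceph.conf",
    "--output-config", "/etc/ceph/ceph.conf",
    "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
    "--output-pub-ssh-key", f"{testdir}/ceph.pub",
    "--mon-id", "a", "--mgr-id", "y",
    "--orphan-initial-daemons", "--skip-monitoring-stack",
    "--mon-ip", "192.168.123.100", "--skip-admin-label",
]
print(" ".join(bootstrap_cmd))
```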
2026-03-09T17:20:16.043 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T17:20:16.043 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T17:20:16.045 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T17:20:16.045 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.048 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-09T17:20:16.048 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T17:20:16.050 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-09T17:20:16.050 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.052 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-09T17:20:16.052 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-09T17:20:16.054 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-09T17:20:16.054 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.056 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-09T17:20:16.057 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T17:20:16.059 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-09T17:20:16.059 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.061 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-09T17:20:16.064 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-09T17:20:16.064 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-09T17:20:16.064 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check... 
2026-03-09T17:20:16.064 INFO:teuthology.orchestra.run.vm00.stdout:docker (/usr/bin/docker) is present 2026-03-09T17:20:16.064 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present 2026-03-09T17:20:16.064 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present 2026-03-09T17:20:16.066 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T17:20:16.066 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T17:20:16.068 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T17:20:16.069 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.071 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-09T17:20:16.071 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T17:20:16.073 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-09T17:20:16.073 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.075 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-09T17:20:16.075 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-09T17:20:16.077 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-09T17:20:16.077 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.080 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-09T17:20:16.080 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T17:20:16.082 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-09T17:20:16.082 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-09T17:20:16.085 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 140342227077680 on /run/cephadm/16190428-1bdc-11f1-aea4-d920f1c7e51e.lock 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Lock 140342227077680 acquired on /run/cephadm/16190428-1bdc-11f1-aea4-d920f1c7e51e.lock 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ... 2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ... 
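The time-synchronization check walks a list of candidate units with `systemctl is-enabled` / `systemctl is-active` and accepts the first one that passes both, which here is ntp.service. A simplified sketch of that probe (unit list inferred from the log; cephadm's real check is more involved):

```python
import subprocess

# candidate units probed in the log, in the same order
TIME_SYNC_UNITS = ["chrony.service", "chronyd.service",
                   "systemd-timesyncd.service", "ntpd.service", "ntp.service"]

def unit_ok(unit):
    enabled = subprocess.run(["systemctl", "is-enabled", unit],
                             capture_output=True).returncode == 0
    active = subprocess.run(["systemctl", "is-active", unit],
                            capture_output=True).returncode == 0
    return enabled and active

def find_time_sync_unit():
    for unit in TIME_SYNC_UNITS:
        if unit_ok(unit):
            print(f"Unit {unit} is enabled and running")
            return unit
    raise RuntimeError("no time synchronization service found")

find_time_sync_unit()
```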
2026-03-09T17:20:16.088 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-09T17:20:16.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.100 metric 100 2026-03-09T17:20:16.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-09T17:20:16.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.100 metric 100 2026-03-09T17:20:16.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.100 metric 100 2026-03-09T17:20:16.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T17:20:16.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32` 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32` 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32'] 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T17:20:16.093 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
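The mon public CIDR is inferred by testing which locally routed networks contain the mon IP. A simplified containment check with the ipaddress module (CIDRs taken from the `ip route` output above; note that cephadm's own matching also reports the /32 host route, which a plain containment test would not):

```python
import ipaddress

mon_ip = ipaddress.ip_address("192.168.123.100")
# networks taken from the 'ip route' output above
local_networks = ["172.17.0.0/16", "192.168.123.0/24", "192.168.123.1/32"]

matches = [net for net in local_networks
           if mon_ip in ipaddress.ip_network(net)]
print(matches)  # ['192.168.123.0/24'] -> candidate mon public network
```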
2026-03-09T17:20:17.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph 2026-03-09T17:20:17.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T17:20:17.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:20:17.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T17:20:17.244 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T17:20:17.244 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T17:20:17.244 INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image... 2026-03-09T17:20:17.336 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167 2026-03-09T17:20:17.336 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys... 2026-03-09T17:20:17.459 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQBRAa9pmIeyGRAA3MgcpUL6tHebCNFIZrsWBA== 2026-03-09T17:20:17.561 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQBRAa9p+PHUHxAAAA1BSyjHkw2b+J5fTmaF+A== 2026-03-09T17:20:17.661 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQBRAa9pnKfZJRAALlAZkhfQ+XtumS9a4PI2bA== 2026-03-09T17:20:17.661 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap... 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for a [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:17.775 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon... 
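Before the first monitor starts, bootstrap creates the initial keys with ceph-authtool and writes an epoch-0 monmap with monmaptool using the fsid and the mon's addrvec, as shown above. A hedged approximation of the monmap step (normally run inside the ceph container; the exact flag set cephadm passes is an assumption inferred from the monmaptool output in the log):

```python
import subprocess

fsid = "16190428-1bdc-11f1-aea4-d920f1c7e51e"
addrv = "[v2:192.168.123.100:3300,v1:192.168.123.100:6789]"

# Create a fresh single-mon monmap for mon.a at the addrvec above.
subprocess.run(
    ["monmaptool", "--create", "--clobber", "--fsid", fsid,
     "--addv", "a", addrv, "/tmp/monmap"],
    check=True)
```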
2026-03-09T17:20:17.912 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 1 imported monmap: 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 0 /usr/bin/ceph-mon: set fsid to 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Git sha 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: DB SUMMARY 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: DB Session ID: 7NS9XJS2A753AW33RTZK 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.create_if_missing: 1 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.env: 0x55ff9e9f0dc0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.info_log: 0x55ffa5916da0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: 
Options.use_direct_reads: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.db_log_dir: 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.wal_dir: 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T17:20:17.913 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.write_buffer_manager: 0x55ffa590d5e0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.row_cache: None 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.wal_filter: None 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T17:20:17.914 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: 
Options.max_open_files: -1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Compression algorithms supported: 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kZSTD supported: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T17:20:17.914 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.863+0000 7fd231019d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T17:20:17.915 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.merge_operator: 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffa5909520) 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55ffa592f350 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-09T17:20:17.915 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T17:20:17.915 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.num_levels: 7 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T17:20:17.916 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T17:20:17.916 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 
rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T17:20:17.916 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f4bb0486-807f-45db-a49d-3e6e4b386341 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.867+0000 7fd231019d80 4 rocksdb: [db/version_set.cc:5047] 
Creating manifest 5 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.871+0000 7fd231019d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ffa5930e00 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.871+0000 7fd231019d80 4 rocksdb: DB pointer 0x55ffa5a14000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.871+0000 7fd2287a3640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.871+0000 7fd2287a3640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T17:20:17.917 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T17:20:17.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55ffa592f350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.875+0000 7fd231019d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.875+0000 
7fd231019d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:20:17.875+0000 7fd231019d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-09T17:20:17.918 INFO:teuthology.orchestra.run.vm00.stdout:create mon.a on 2026-03-09T17:20:18.065 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-09T17:20:18.237 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T17:20:18.432 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e.target → /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e.target. 2026-03-09T17:20:18.432 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e.target → /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e.target. 2026-03-09T17:20:18.639 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a 2026-03-09T17:20:18.639 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a.service: Unit ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a.service not loaded. 2026-03-09T17:20:18.830 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e.target.wants/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a.service → /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service. 2026-03-09T17:20:18.841 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-09T17:20:18.841 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T17:20:18.841 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start... 2026-03-09T17:20:18.841 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon... 
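The systemctl output above shows cephadm wiring the per-daemon unit ceph-<fsid>@mon.a into the ceph-<fsid> target before waiting for the monitor; the non-zero exit from "systemctl reset-failed" is expected for a unit that has never been loaded. A rough, hypothetical Python sketch of that sequence (not cephadm's actual code; the fsid is the one reported for this run, everything else is illustrative):

import subprocess

fsid = "16190428-1bdc-11f1-aea4-d920f1c7e51e"  # fsid from this run
unit = f"ceph-{fsid}@mon.a"

# A unit that has never been loaded makes reset-failed exit non-zero,
# matching the "Non-zero exit code 1 from systemctl reset-failed" message
# above; it is safe to ignore.
subprocess.run(["systemctl", "reset-failed", unit], check=False)

# Enabling creates the .wants/ symlinks logged above; start then launches mon.a.
subprocess.run(["systemctl", "enable", unit], check=True)
subprocess.run(["systemctl", "start", unit], check=True)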
2026-03-09T17:20:19.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:18 vm00 bash[20299]: cluster 2026-03-09T17:20:18.980517+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services: 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.250532s) 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data: 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.991+0000 7f8ffce95640 1 Processor -- start 2026-03-09T17:20:19.754 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.991+0000 7f8ffce95640 1 -- start start 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.991+0000 7f8ffce95640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.991+0000 7f8ffce95640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8ff8104d60 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.991+0000 7f8ff77fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.995+0000 7f8ff77fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40464/0 (socket says 192.168.123.100:40464) 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:18.995+0000 7f8ff77fe640 1 -- 192.168.123.100:0/2142079821 learned_addr learned my addr 192.168.123.100:0/2142079821 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.995+0000 7f8ff77fe640 1 -- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 msgr2=0x7f8ff8104820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk peer close file descriptor 12 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.995+0000 7f8ff77fe640 1 -- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 msgr2=0x7f8ff8104820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.995+0000 7f8ff77fe640 1 --2- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:18.995+0000 7f8ff77fe640 1 --2- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ff77fe640 1 --2- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ff77fe640 1 -- 192.168.123.100:0/2142079821 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8ff8104ee0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ff77fe640 1 --2- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f8ff81051f0 tx=0x7f8fe402f030 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ee61c00eed8cf7fe server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ff67fc640 1 -- 192.168.123.100:0/2142079821 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8fe4035070 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ff67fc640 1 -- 192.168.123.100:0/2142079821 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f8fe402fbb0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ffce95640 1 -- 192.168.123.100:0/2142079821 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 msgr2=0x7f8ff8104820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ffce95640 1 --2- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f8ff81051f0 tx=0x7f8fe402f030 comp rx=0 tx=0).stop 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ffce95640 1 -- 192.168.123.100:0/2142079821 shutdown_connections 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.195+0000 7f8ffce95640 1 --2- 192.168.123.100:0/2142079821 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8104820 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- 192.168.123.100:0/2142079821 >> 192.168.123.100:0/2142079821 conn(0x7f8ff81000d0 msgr2=0x7f8ff81024f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- 192.168.123.100:0/2142079821 shutdown_connections 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- 192.168.123.100:0/2142079821 wait complete. 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 Processor -- start 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- start start 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8199980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8ff8109fe0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff77fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8199980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff77fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8199980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40480/0 (socket says 192.168.123.100:40480) 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff77fe640 1 -- 192.168.123.100:0/4252241553 learned_addr learned my addr 
192.168.123.100:0/4252241553 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff77fe640 1 -- 192.168.123.100:0/4252241553 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8ff8199ec0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff77fe640 1 --2- 192.168.123.100:0/4252241553 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8199980 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f8fe402f070 tx=0x7f8fe4004830 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8fe4048070 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f8fe4038990 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8ff819a150 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f8ff819ce40 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8fe4038cb0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8ff8104820 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f8fe4004020 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.199+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f8fe40038f0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.203+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f8fe402fbb0 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:19.231+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status"} v 0) -- 0x7f8ff819d380 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ff4ff9640 1 -- 192.168.123.100:0/4252241553 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) ==== 54+0+317 (secure 0 0 0) 0x7f8fe4004430 con 0x7f8ff8104420 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 msgr2=0x7f8ff8199980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 --2- 192.168.123.100:0/4252241553 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8199980 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f8fe402f070 tx=0x7f8fe4004830 comp rx=0 tx=0).stop 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 shutdown_connections 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 --2- 192.168.123.100:0/4252241553 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ff8104420 0x7f8ff8199980 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:19.755 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 >> 192.168.123.100:0/4252241553 conn(0x7f8ff81000d0 msgr2=0x7f8ff8100cf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:19.756 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 shutdown_connections 2026-03-09T17:20:19.756 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.235+0000 7f8ffce95640 1 -- 192.168.123.100:0/4252241553 wait complete. 2026-03-09T17:20:19.756 INFO:teuthology.orchestra.run.vm00.stdout:mon is available 2026-03-09T17:20:19.756 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can from ceph.conf... 
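The "Assimilating anything we can from ceph.conf..." step corresponds to the mon_command {"prefix": "config assimilate-conf"} visible in the trace that follows: options from the bootstrap ceph.conf are pushed into the monitor's configuration database, and whatever cannot be absorbed is printed back (the [global]/[mgr]/[osd] residue shown in the next lines). A minimal sketch of the same call driven from Python; the input path is an assumption for illustration, not taken from this run:

import subprocess

# Feed a ceph.conf to the mon config store; stdout is the residue that has
# to stay file-based (fsid, mon_host, and any options the mon did not accept).
residual = subprocess.run(
    ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
print(residual)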
2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 7fd971496640 1 Processor -- start 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 7fd971496640 1 -- start start 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 7fd971496640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 7fd971496640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd96c109540 con 0x7fd96c108b70 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 7fd96affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 7fd96affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51304/0 (socket says 192.168.123.100:51304) 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.871+0000 
7fd96affd640 1 -- 192.168.123.100:0/802601738 learned_addr learned my addr 192.168.123.100:0/802601738 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 -- 192.168.123.100:0/802601738 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd96c109d70 con 0x7fd96c108b70 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 --2- 192.168.123.100:0/802601738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c108f70 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fd954009920 tx=0x7fd95402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6aa5f18c3e942c1b server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd969ffb640 1 -- 192.168.123.100:0/802601738 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd95403c070 con 0x7fd96c108b70 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd969ffb640 1 -- 192.168.123.100:0/802601738 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fd95402fae0 con 0x7fd96c108b70 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd969ffb640 1 -- 192.168.123.100:0/802601738 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd95402fde0 con 0x7fd96c108b70 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- 192.168.123.100:0/802601738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 msgr2=0x7fd96c108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 --2- 192.168.123.100:0/802601738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c108f70 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fd954009920 tx=0x7fd95402ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- 192.168.123.100:0/802601738 shutdown_connections 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 --2- 192.168.123.100:0/802601738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c108f70 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- 192.168.123.100:0/802601738 >> 192.168.123.100:0/802601738 conn(0x7fd96c07c040 msgr2=0x7fd96c07c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- 192.168.123.100:0/802601738 shutdown_connections 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- 
192.168.123.100:0/802601738 wait complete. 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 Processor -- start 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- start start 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c19e220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd971496640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd96c10a2a0 con 0x7fd96c108b70 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c19e220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c19e220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51318/0 (socket says 192.168.123.100:51318) 2026-03-09T17:20:20.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 -- 192.168.123.100:0/2700050807 learned_addr learned my addr 192.168.123.100:0/2700050807 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 -- 192.168.123.100:0/2700050807 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd96c19e760 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.875+0000 7fd96affd640 1 --2- 192.168.123.100:0/2700050807 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c19e220 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fd95402fab0 tx=0x7fd9540047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd954046070 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fd954041590 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd96c19e9f0 con 0x7fd96c108b70 
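The connect / mon_subscribe / get_command_descriptions / mon_command / mark_down pattern repeated in these traces is each short-lived ceph CLI invocation opening a fresh msgr2 session to the single monitor, running one command, and tearing the session down. A hedged sketch of the equivalent "status" call through the python-rados binding instead of the CLI (the conffile path is an assumption):

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path is illustrative
cluster.connect()
try:
    # Same JSON command the trace shows being dispatched to mon.0.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b"")
    print(json.loads(outbuf))
finally:
    cluster.shutdown()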
2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd95403c040 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd96c1a16e0 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fd954053020 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fd95404bb50 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd930005180 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.879+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fd95404bda0 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:19.915+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7fd930003c00 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) ==== 70+0+380 (secure 0 0 0) 0x7fd95405d4d0 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd94bfff640 1 -- 192.168.123.100:0/2700050807 <== mon.0 v2:192.168.123.100:3300/0 8 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fd95402fae0 con 0x7fd96c108b70 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 msgr2=0x7fd96c19e220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 --2- 192.168.123.100:0/2700050807 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c19e220 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fd95402fab0 tx=0x7fd9540047c0 comp rx=0 tx=0).stop 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 -- 
192.168.123.100:0/2700050807 shutdown_connections 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 --2- 192.168.123.100:0/2700050807 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd96c108b70 0x7fd96c19e220 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 >> 192.168.123.100:0/2700050807 conn(0x7fd96c07c040 msgr2=0x7fd96c192540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 shutdown_connections 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.011+0000 7fd971496640 1 -- 192.168.123.100:0/2700050807 wait complete. 2026-03-09T17:20:20.177 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf... 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.985818+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.985818+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986235+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 1 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986235+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 1 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986274+0000 mon.a (mon.0) 4 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986274+0000 mon.a (mon.0) 4 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986311+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986311+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986351+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986351+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:20.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986387+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986387+0000 mon.a (mon.0) 7 : 
cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986425+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986425+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986461+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.986461+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.990768+0000 mon.a (mon.0) 10 : cluster [DBG] fsmap 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.990768+0000 mon.a (mon.0) 10 : cluster [DBG] fsmap 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.995517+0000 mon.a (mon.0) 11 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.995517+0000 mon.a (mon.0) 11 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.996585+0000 mon.a (mon.0) 12 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: cluster 2026-03-09T17:20:18.996585+0000 mon.a (mon.0) 12 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: audit 2026-03-09T17:20:19.236283+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/4252241553' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: audit 2026-03-09T17:20:19.236283+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/4252241553' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: audit 2026-03-09T17:20:19.920202+0000 mon.a (mon.0) 14 : audit [INF] from='client.? 192.168.123.100:0/2700050807' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T17:20:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: audit 2026-03-09T17:20:19.920202+0000 mon.a (mon.0) 14 : audit [INF] from='client.? 
192.168.123.100:0/2700050807' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd3041640 1 Processor -- start 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd3041640 1 -- start start 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd3041640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc07abb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd3041640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f1dcc07b0f0 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd0db6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc07abb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd0db6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc07abb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51324/0 (socket says 192.168.123.100:51324) 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd0db6640 1 -- 192.168.123.100:0/2690967119 learned_addr learned my addr 192.168.123.100:0/2690967119 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd0db6640 1 -- 192.168.123.100:0/2690967119 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1dcc07b270 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd0db6640 1 --2- 192.168.123.100:0/2690967119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc07abb0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f1db4009920 tx=0x7f1db402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=67de37ad3cbb51cb server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dc37fe640 1 -- 192.168.123.100:0/2690967119 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1db403c070 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dc37fe640 1 -- 192.168.123.100:0/2690967119 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f1db4037440 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dc37fe640 1 -- 192.168.123.100:0/2690967119 <== mon.0 v2:192.168.123.100:3300/0 3 ==== 
mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1db40354d0 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd3041640 1 -- 192.168.123.100:0/2690967119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 msgr2=0x7f1dcc07abb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.323+0000 7f1dd3041640 1 --2- 192.168.123.100:0/2690967119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc07abb0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f1db4009920 tx=0x7f1db402ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/2690967119 shutdown_connections 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 --2- 192.168.123.100:0/2690967119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc07abb0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/2690967119 >> 192.168.123.100:0/2690967119 conn(0x7f1dcc102090 msgr2=0x7f1dcc1044b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/2690967119 shutdown_connections 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/2690967119 wait complete. 
2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 Processor -- start 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- start start 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc19e910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f1dcc10cdd0 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd0db6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc19e910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd0db6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc19e910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51330/0 (socket says 192.168.123.100:51330) 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd0db6640 1 -- 192.168.123.100:0/1458137245 learned_addr learned my addr 192.168.123.100:0/1458137245 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd0db6640 1 -- 192.168.123.100:0/1458137245 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1dcc19ee50 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd0db6640 1 --2- 192.168.123.100:0/1458137245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc19e910 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f1db4006b90 tx=0x7f1db4035dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1db4045070 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f1db402fbc0 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1db403c070 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1dcc19f0e0 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f1dcc19f500 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f1db404a430 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f1db4049a50 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.327+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1d94005180 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.331+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f1db4049ca0 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.359+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f1d94005740 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dc1ffb640 1 -- 192.168.123.100:0/1458137245 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) ==== 76+0+181 (secure 0 0 0) 0x7f1db4040450 con 0x7f1dcc07c750 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 msgr2=0x7f1dcc19e910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 --2- 192.168.123.100:0/1458137245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 0x7f1dcc19e910 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f1db4006b90 tx=0x7f1db4035dc0 comp rx=0 tx=0).stop 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 shutdown_connections 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 --2- 192.168.123.100:0/1458137245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1dcc07c750 
0x7f1dcc19e910 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 >> 192.168.123.100:0/1458137245 conn(0x7f1dcc102090 msgr2=0x7f1dcc102c20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 shutdown_connections 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.363+0000 7f1dd3041640 1 -- 192.168.123.100:0/1458137245 wait complete. 2026-03-09T17:20:20.603 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor... 2026-03-09T17:20:20.687 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 systemd[1]: Stopping Ceph mon.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T17:20:20.812 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: debug 2026-03-09T17:20:20.683+0000 7f9d50544640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20299]: debug 2026-03-09T17:20:20.683+0000 7f9d50544640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20685]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-mon-a 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a.service: Deactivated successfully. 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 systemd[1]: Stopped Ceph mon.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 systemd[1]: Started Ceph mon.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 
2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.919+0000 7f0282cdcd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.919+0000 7f0282cdcd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.919+0000 7f0282cdcd80 0 pidfile_write: ignore empty --pid-file 2026-03-09T17:20:21.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 0 load: jerasure load: lrc 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Git sha 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: DB SUMMARY 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: DB Session ID: 2KVI3EHO7YTSDUWMIZGQ 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 88915 ; 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.env: 0x558295463dc0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.info_log: 0x55829bc3d880 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.statistics: (nil) 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.use_fsync: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T17:20:21.040 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.db_log_dir: 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.wal_dir: 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T17:20:21.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.write_buffer_manager: 0x55829bc41900 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.unordered_write: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.row_cache: None 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.wal_filter: None 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.wal_compression: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: 
debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_open_files: -1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Compression algorithms supported: 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kZSTD supported: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T17:20:21.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.merge_operator: 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55829bc3c480) 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: cache_index_and_filter_blocks: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: pin_top_level_index_and_filter: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: index_type: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: data_block_index_type: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: index_shortening: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: checksum: 4 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: no_block_cache: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_cache: 0x55829bc63350 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_cache_name: BinnedLRUCache 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_cache_options: 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: capacity : 536870912 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: num_shard_bits : 4 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: strict_capacity_limit : 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: high_pri_pool_ratio: 0.000 2026-03-09T17:20:21.042 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_cache_compressed: (nil) 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: persistent_cache: (nil) 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_size: 4096 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_size_deviation: 10 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_restart_interval: 16 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: index_block_restart_interval: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: metadata_block_size: 4096 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: partition_filters: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: use_delta_encoding: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: filter_policy: bloomfilter 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: whole_key_filtering: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: verify_compression: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: read_amp_bytes_per_bit: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: format_version: 5 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: enable_index_compression: 1 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: block_align: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: max_auto_readahead_size: 262144 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: prepopulate_block_cache: 0 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: initial_auto_readahead_size: 8192 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: num_file_reads_for_auto_readahead: 2 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.num_levels: 7 2026-03-09T17:20:21.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T17:20:21.043 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 
7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.923+0000 7f0282cdcd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T17:20:21.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T17:20:21.044 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f4bb0486-807f-45db-a49d-3e6e4b386341 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773076820935530, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T17:20:21.044 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:20 vm00 bash[20770]: debug 2026-03-09T17:20:20.931+0000 7f0282cdcd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T17:20:21.249 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.963+0000 7f49d358d640 1 Processor -- start 2026-03-09T17:20:21.249 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.963+0000 7f49d358d640 1 -- start start 2026-03-09T17:20:21.249 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.963+0000 7f49d358d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:21.249 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.963+0000 7f49d358d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f49cc109540 con 0x7f49cc108b70 2026-03-09T17:20:21.249 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.963+0000 7f49d1302640 1 -- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 msgr2=0x7f49cc108f70 unknown :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed to v2:192.168.123.100:3300/0 2026-03-09T17:20:21.249 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:20.963+0000 7f49d1302640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.163+0000 7f49d1302640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.163+0000 7f49d1302640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51350/0 (socket says 192.168.123.100:51350) 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.163+0000 7f49d1302640 1 -- 192.168.123.100:0/1399575153 learned_addr learned my addr 192.168.123.100:0/1399575153 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 -- 192.168.123.100:0/1399575153 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f49cc109d70 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 --2- 192.168.123.100:0/1399575153 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f49bc009e50 tx=0x7f49bc02f330 comp rx=0 tx=0).ready entity=mon.0 client_cookie=78ff1c6c854b8138 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49bbfff640 1 -- 192.168.123.100:0/1399575153 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 
0) 0x7f49bc003a80 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49bbfff640 1 -- 192.168.123.100:0/1399575153 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f49bc033070 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- 192.168.123.100:0/1399575153 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 msgr2=0x7f49cc108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 --2- 192.168.123.100:0/1399575153 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f49bc009e50 tx=0x7f49bc02f330 comp rx=0 tx=0).stop 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- 192.168.123.100:0/1399575153 shutdown_connections 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 --2- 192.168.123.100:0/1399575153 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc108f70 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- 192.168.123.100:0/1399575153 >> 192.168.123.100:0/1399575153 conn(0x7f49cc07c040 msgr2=0x7f49cc07c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- 192.168.123.100:0/1399575153 shutdown_connections 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- 192.168.123.100:0/1399575153 wait complete. 
2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 Processor -- start 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- start start 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc19e820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d358d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f49cc10cfa0 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc19e820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc19e820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51366/0 (socket says 192.168.123.100:51366) 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 -- 192.168.123.100:0/1804430473 learned_addr learned my addr 192.168.123.100:0/1804430473 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 -- 192.168.123.100:0/1804430473 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f49cc19ed60 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49d1302640 1 --2- 192.168.123.100:0/1804430473 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc19e820 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f49bc009520 tx=0x7f49bc038900 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f49bc046070 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.167+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f49bc038d50 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f49bc0073c0 con 0x7f49cc108b70 2026-03-09T17:20:21.250 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f49cc19eff0 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f49cc1a1ce0 con 0x7f49cc108b70 2026-03-09T17:20:21.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f49bc041410 con 0x7f49cc108b70 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f49bc03e070 con 0x7f49cc108b70 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4998005180 con 0x7f49cc108b70 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.171+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f49bc041870 con 0x7f49cc108b70 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.203+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) -- 0x7f4998005470 con 0x7f49cc108b70 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.203+0000 7f49ba7fc640 1 -- 192.168.123.100:0/1804430473 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3) ==== 144+0+0 (secure 0 0 0) 0x7f49bc007560 con 0x7f49cc108b70 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 msgr2=0x7f49cc19e820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 --2- 192.168.123.100:0/1804430473 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 0x7f49cc19e820 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f49bc009520 tx=0x7f49bc038900 comp rx=0 tx=0).stop 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 shutdown_connections 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 --2- 192.168.123.100:0/1804430473 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f49cc108b70 
0x7f49cc19e820 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 >> 192.168.123.100:0/1804430473 conn(0x7f49cc07c040 msgr2=0x7f49cc105e10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 shutdown_connections 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.207+0000 7f49d358d640 1 -- 192.168.123.100:0/1804430473 wait complete. 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr... 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T17:20:21.251 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.067+0000 7f0282cdcd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773076821069529, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 85476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 258, "table_properties": {"data_size": 83634, "index_size": 231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10894, "raw_average_key_size": 48, "raw_value_size": 77543, "raw_average_value_size": 344, "num_data_blocks": 10, "num_entries": 225, "num_filter_entries": 225, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773076820, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f4bb0486-807f-45db-a49d-3e6e4b386341", "db_session_id": "2KVI3EHO7YTSDUWMIZGQ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.067+0000 7f0282cdcd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773076821069621, "job": 1, "event": "recovery_finished"} 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.067+0000 7f0282cdcd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.147+0000 7f0282cdcd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.147+0000 7f0282cdcd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55829bc64e00 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.147+0000 7f0282cdcd80 4 rocksdb: DB pointer 0x55829bd70000 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.147+0000 7f0278aa6640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: debug 2026-03-09T17:20:21.147+0000 7f0278aa6640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: ** DB Stats ** 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Uptime(secs): 0.2 total, 0.2 interval 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T17:20:21.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: ** Compaction Stats [default] ** 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: L0 2/0 85.33 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.13 0.00 1 0.134 0 0 0.0 0.0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Sum 2/0 85.33 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.13 0.00 1 0.134 0 0 0.0 0.0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.13 0.00 1 0.134 0 0 0.0 0.0 2026-03-09T17:20:21.358 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: ** Compaction Stats [default] ** 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.13 0.00 1 0.134 0 0 0.0 0.0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Uptime(secs): 0.2 total, 0.2 interval 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Cumulative compaction: 0.00 GB write, 0.36 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Interval compaction: 0.00 GB write, 0.36 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Block cache BinnedLRUCache@0x55829bc63350#7 capacity: 512.00 MB usage: 1.19 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: Block cache entry stats(count,size,portion): FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.42 KB,8.04663e-05%) Misc(1,0.00 KB,0%) 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161199+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:20:21.358 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161199+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161291+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161291+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161329+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161329+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161366+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161366+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161407+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161407+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161442+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161442+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161478+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161478+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161512+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161512+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161839+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161839+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161922+0000 
mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.161922+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.162674+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: cluster 2026-03-09T17:20:21.162674+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: audit 2026-03-09T17:20:21.207504+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/1804430473' entity='client.admin' 2026-03-09T17:20:21.358 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 bash[20770]: audit 2026-03-09T17:20:21.207504+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/1804430473' entity='client.admin' 2026-03-09T17:20:21.412 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y 2026-03-09T17:20:21.412 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y.service: Unit ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y.service not loaded. 2026-03-09T17:20:21.569 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e.target.wants/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y.service → /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service. 2026-03-09T17:20:21.577 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-09T17:20:21.577 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T17:20:21.577 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-09T17:20:21.577 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T17:20:21.577 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start... 2026-03-09T17:20:21.577 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr... 2026-03-09T17:20:21.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:20:21.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:21 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "16190428-1bdc-11f1-aea4-d920f1c7e51e", 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 
"num_objects": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T17:20:21.822 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T17:20:18:986784+0000", 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T17:20:18.988210+0000", 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 Processor -- start 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- start start 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f4144108dd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:21.823 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f4144108dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f4144108dd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51394/0 (socket says 192.168.123.100:51394) 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 -- 192.168.123.100:0/1351032243 learned_addr learned my addr 192.168.123.100:0/1351032243 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- 192.168.123.100:0/1351032243 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f41441093a0 con 0x7f41441089d0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 -- 192.168.123.100:0/1351032243 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4144109bd0 con 0x7f41441089d0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 --2- 192.168.123.100:0/1351032243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f4144108dd0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f4130009b80 tx=0x7f413002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b8a87cc1188d9d64 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41427fc640 1 -- 192.168.123.100:0/1351032243 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f413003c070 con 0x7f41441089d0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41427fc640 1 -- 192.168.123.100:0/1351032243 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f4130037440 con 0x7f41441089d0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41427fc640 1 -- 192.168.123.100:0/1351032243 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f41300353a0 con 0x7f41441089d0 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- 192.168.123.100:0/1351032243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 msgr2=0x7f4144108dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 --2- 192.168.123.100:0/1351032243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f4144108dd0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto 
rx=0x7f4130009b80 tx=0x7f413002f190 comp rx=0 tx=0).stop 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- 192.168.123.100:0/1351032243 shutdown_connections 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 --2- 192.168.123.100:0/1351032243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f4144108dd0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- 192.168.123.100:0/1351032243 >> 192.168.123.100:0/1351032243 conn(0x7f414407be90 msgr2=0x7f414407c2c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- 192.168.123.100:0/1351032243 shutdown_connections 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- 192.168.123.100:0/1351032243 wait complete. 2026-03-09T17:20:21.823 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 Processor -- start 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- start start 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f414419a2f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f4149e88640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f414410ce00 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f414419a2f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f414419a2f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51400/0 (socket says 192.168.123.100:51400) 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.731+0000 7f41437fe640 1 -- 192.168.123.100:0/1624963909 learned_addr learned my addr 192.168.123.100:0/1624963909 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f41437fe640 1 -- 192.168.123.100:0/1624963909 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f414419a830 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:21.735+0000 7f41437fe640 1 --2- 192.168.123.100:0/1624963909 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f414419a2f0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f413003a040 tx=0x7f4130003940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f413003c070 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4149e88640 1 -- 192.168.123.100:0/1624963909 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f414419aac0 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4149e88640 1 -- 192.168.123.100:0/1624963909 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f414419b7a0 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f413002fa10 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f41300359c0 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f4130049440 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f41300405c0 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.735+0000 7f4149e88640 1 -- 192.168.123.100:0/1624963909 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4110005180 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.739+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f4130003a00 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.771+0000 7f4149e88640 1 -- 192.168.123.100:0/1624963909 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f4110005740 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.771+0000 7f4140ff9640 1 -- 192.168.123.100:0/1624963909 <== mon.0 v2:192.168.123.100:3300/0 7 ==== 
mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f4130003be0 con 0x7f41441089d0 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 -- 192.168.123.100:0/1624963909 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 msgr2=0x7f414419a2f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 --2- 192.168.123.100:0/1624963909 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f414419a2f0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f413003a040 tx=0x7f4130003940 comp rx=0 tx=0).stop 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 -- 192.168.123.100:0/1624963909 shutdown_connections 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 --2- 192.168.123.100:0/1624963909 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f41441089d0 0x7f414419a2f0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 -- 192.168.123.100:0/1624963909 >> 192.168.123.100:0/1624963909 conn(0x7f414407be90 msgr2=0x7f4144106000 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 -- 192.168.123.100:0/1624963909 shutdown_connections 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:21.775+0000 7f41267fc640 1 -- 192.168.123.100:0/1624963909 wait complete. 2026-03-09T17:20:21.824 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)... 2026-03-09T17:20:21.938 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:21 vm00 bash[21037]: debug 2026-03-09T17:20:21.823+0000 7fce0ef52140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:20:22.212 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:21 vm00 bash[21037]: debug 2026-03-09T17:20:21.935+0000 7fce0ef52140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:20:22.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:22 vm00 bash[20770]: audit 2026-03-09T17:20:21.773130+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/1624963909' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:20:22.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:22 vm00 bash[20770]: audit 2026-03-09T17:20:21.773130+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.100:0/1624963909' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:20:22.537 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: debug 2026-03-09T17:20:22.211+0000 7fce0ef52140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:20:22.975 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: debug 2026-03-09T17:20:22.635+0000 7fce0ef52140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:20:22.975 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: debug 2026-03-09T17:20:22.715+0000 7fce0ef52140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:20:22.975 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T17:20:22.976 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T17:20:22.976 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: from numpy import show_config as show_numpy_config 2026-03-09T17:20:22.976 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: debug 2026-03-09T17:20:22.835+0000 7fce0ef52140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:20:23.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:22 vm00 bash[21037]: debug 2026-03-09T17:20:22.971+0000 7fce0ef52140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:20:23.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.011+0000 7fce0ef52140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:20:23.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.047+0000 7fce0ef52140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:20:23.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.091+0000 7fce0ef52140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:20:23.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.139+0000 7fce0ef52140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:20:23.839 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.567+0000 7fce0ef52140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:20:23.839 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.603+0000 7fce0ef52140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:20:23.839 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.639+0000 7fce0ef52140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T17:20:23.839 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 
2026-03-09T17:20:23.775+0000 7fce0ef52140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:20:23.839 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.815+0000 7fce0ef52140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "16190428-1bdc-11f1-aea4-d920f1c7e51e", 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T17:20:24.056 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 2, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 
2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T17:20:18:986784+0000", 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:24.057 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T17:20:18.988210+0000", 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 Processor -- start 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- start start 2026-03-09T17:20:24.059 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a40a4d20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f21a40a52f0 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a40a4d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a40a4d20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51406/0 (socket says 192.168.123.100:51406) 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 -- 192.168.123.100:0/1280387624 learned_addr learned my addr 192.168.123.100:0/1280387624 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 -- 192.168.123.100:0/1280387624 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f21a40a3960 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 --2- 192.168.123.100:0/1280387624 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a40a4d20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f21a000f1d0 tx=0x7f21a0037ba0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b88e255d16382e61 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21aaffd640 1 -- 192.168.123.100:0/1280387624 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f21a0040020 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21aaffd640 1 -- 192.168.123.100:0/1280387624 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f21a000d200 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21aaffd640 1 -- 192.168.123.100:0/1280387624 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f21a003c6a0 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- 192.168.123.100:0/1280387624 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 msgr2=0x7f21a40a4d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:24.059 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 --2- 192.168.123.100:0/1280387624 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a40a4d20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f21a000f1d0 tx=0x7f21a0037ba0 comp rx=0 tx=0).stop 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- 192.168.123.100:0/1280387624 shutdown_connections 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 --2- 192.168.123.100:0/1280387624 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a40a4d20 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- 192.168.123.100:0/1280387624 >> 192.168.123.100:0/1280387624 conn(0x7f21a409fc30 msgr2=0x7f21a40a2090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- 192.168.123.100:0/1280387624 shutdown_connections 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- 192.168.123.100:0/1280387624 wait complete. 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 Processor -- start 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- start start 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a413a2f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f21a40a6540 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a413a2f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a413a2f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51410/0 (socket says 192.168.123.100:51410) 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 -- 192.168.123.100:0/883577286 learned_addr learned my addr 192.168.123.100:0/883577286 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 
7f21abfff640 1 -- 192.168.123.100:0/883577286 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f21a413a830 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21abfff640 1 --2- 192.168.123.100:0/883577286 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a413a2f0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f21a00380d0 tx=0x7f21a003cab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f21a0040030 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.967+0000 7f21b1036640 1 -- 192.168.123.100:0/883577286 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f21a413aac0 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21b1036640 1 -- 192.168.123.100:0/883577286 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f21a413d7b0 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f21a0049070 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f21a004b070 con 0x7f21a40a4920 2026-03-09T17:20:24.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f21a0045420 con 0x7f21a40a4920 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f21a0044920 con 0x7f21a40a4920 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21b1036640 1 -- 192.168.123.100:0/883577286 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2170005180 con 0x7f21a40a4920 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:23.971+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f21a00443b0 con 0x7f21a40a4920 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.003+0000 7f21b1036640 1 -- 192.168.123.100:0/883577286 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 
0x7f2170005740 con 0x7f21a40a4920 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.003+0000 7f21a97fa640 1 -- 192.168.123.100:0/883577286 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f21a003cd70 con 0x7f21a40a4920 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.007+0000 7f218effd640 1 -- 192.168.123.100:0/883577286 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 msgr2=0x7f21a413a2f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.007+0000 7f218effd640 1 --2- 192.168.123.100:0/883577286 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a413a2f0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f21a00380d0 tx=0x7f21a003cab0 comp rx=0 tx=0).stop 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.011+0000 7f218effd640 1 -- 192.168.123.100:0/883577286 shutdown_connections 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.011+0000 7f218effd640 1 --2- 192.168.123.100:0/883577286 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f21a40a4920 0x7f21a413a2f0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.011+0000 7f218effd640 1 -- 192.168.123.100:0/883577286 >> 192.168.123.100:0/883577286 conn(0x7f21a409fc30 msgr2=0x7f21a40a0760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.011+0000 7f218effd640 1 -- 192.168.123.100:0/883577286 shutdown_connections 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:24.011+0000 7f218effd640 1 -- 192.168.123.100:0/883577286 wait complete. 2026-03-09T17:20:24.060 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)... 2026-03-09T17:20:24.168 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:24 vm00 bash[20770]: audit 2026-03-09T17:20:24.007725+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/883577286' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:20:24.168 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:24 vm00 bash[20770]: audit 2026-03-09T17:20:24.007725+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.100:0/883577286' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:20:24.168 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:23 vm00 bash[21037]: debug 2026-03-09T17:20:23.867+0000 7fce0ef52140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:20:24.168 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:23.999+0000 7fce0ef52140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:20:24.537 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:24.167+0000 7fce0ef52140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:20:24.537 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:24.331+0000 7fce0ef52140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:20:24.537 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:24.367+0000 7fce0ef52140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:20:24.537 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:24.407+0000 7fce0ef52140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:20:24.838 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:24.555+0000 7fce0ef52140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:20:24.838 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:24 vm00 bash[21037]: debug 2026-03-09T17:20:24.787+0000 7fce0ef52140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:20:25.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: cluster 2026-03-09T17:20:24.789731+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: cluster 2026-03-09T17:20:24.789731+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: cluster 2026-03-09T17:20:24.792969+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00331198s) 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: cluster 2026-03-09T17:20:24.792969+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00331198s) 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794443+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794443+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794478+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794478+0000 mon.a 
(mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794514+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794514+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794542+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794542+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794562+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794562+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794581+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.794581+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.795794+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.795794+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.797095+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.797095+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: cluster 
2026-03-09T17:20:24.802197+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon y is now available 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: cluster 2026-03-09T17:20:24.802197+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon y is now available 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.813392+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.813392+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.814692+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.814692+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.816161+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.816161+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.818575+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.818575+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.820851+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:20:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:25 vm00 bash[20770]: audit 2026-03-09T17:20:24.820851+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14100 192.168.123.100:0/2453525200' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "16190428-1bdc-11f1-aea4-d920f1c7e51e", 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T17:20:26.368 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T17:20:26.368 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.370 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T17:20:18:986784+0000", 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T17:20:26.370 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T17:20:18.988210+0000", 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.179+0000 7f661f487640 1 Processor -- start 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- start start 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f6618108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6618109540 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f6618108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f6618108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51482/0 (socket says 192.168.123.100:51482) 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 -- 192.168.123.100:0/4042627246 learned_addr learned my addr 192.168.123.100:0/4042627246 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 -- 192.168.123.100:0/4042627246 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6618109d70 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 --2- 192.168.123.100:0/4042627246 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f6618108f70 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f6608009920 tx=0x7f660802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b5c3e39448178e10 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f6607fff640 1 -- 192.168.123.100:0/4042627246 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f660803c070 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f6607fff640 1 -- 192.168.123.100:0/4042627246 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6608037440 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f6607fff640 1 -- 192.168.123.100:0/4042627246 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f66080354d0 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- 192.168.123.100:0/4042627246 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 msgr2=0x7f6618108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 --2- 192.168.123.100:0/4042627246 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f6618108f70 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f6608009920 tx=0x7f660802ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- 192.168.123.100:0/4042627246 shutdown_connections 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 --2- 192.168.123.100:0/4042627246 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f6618108f70 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.371 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- 192.168.123.100:0/4042627246 >> 192.168.123.100:0/4042627246 conn(0x7f661807c040 msgr2=0x7f661807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- 192.168.123.100:0/4042627246 shutdown_connections 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- 192.168.123.100:0/4042627246 wait complete. 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 Processor -- start 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- start start 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f661819e990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661f487640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f661810cfa0 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f661819e990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f661819e990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51488/0 (socket says 192.168.123.100:51488) 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.183+0000 7f661d1fc640 1 -- 192.168.123.100:0/3518790899 learned_addr learned my addr 192.168.123.100:0/3518790899 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f661d1fc640 1 -- 192.168.123.100:0/3518790899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f661819eed0 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f661d1fc640 1 --2- 192.168.123.100:0/3518790899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f661819e990 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f6608006fd0 tx=0x7f6608035dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6608045070 con 
0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f660802fbc0 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f660803c070 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f661819f160 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f66181a1e50 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 3) ==== 50095+0+0 (secure 0 0 0) 0x7f660804a430 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f66067fc640 1 --2- 192.168.123.100:0/3518790899 >> v2:192.168.123.100:6800/2312331294 conn(0x7f65f403da90 0x7f65f403ff50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f6608077510 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f661c9fb640 1 --2- 192.168.123.100:0/3518790899 >> v2:192.168.123.100:6800/2312331294 conn(0x7f65f403da90 0x7f65f403ff50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.187+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f65e0005180 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.191+0000 7f661c9fb640 1 --2- 192.168.123.100:0/3518790899 >> v2:192.168.123.100:6800/2312331294 conn(0x7f65f403da90 0x7f65f403ff50 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f660c0099c0 tx=0x7f660c006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.191+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6608033200 con 0x7f6618108b70 2026-03-09T17:20:26.371 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.319+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f65e0005470 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f66067fc640 1 -- 192.168.123.100:0/3518790899 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1290 (secure 0 0 0) 0x7f660804a750 con 0x7f6618108b70 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 >> v2:192.168.123.100:6800/2312331294 conn(0x7f65f403da90 msgr2=0x7f65f403ff50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 --2- 192.168.123.100:0/3518790899 >> v2:192.168.123.100:6800/2312331294 conn(0x7f65f403da90 0x7f65f403ff50 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f660c0099c0 tx=0x7f660c006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 msgr2=0x7f661819e990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 --2- 192.168.123.100:0/3518790899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f661819e990 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f6608006fd0 tx=0x7f6608035dc0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 shutdown_connections 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 --2- 192.168.123.100:0/3518790899 >> v2:192.168.123.100:6800/2312331294 conn(0x7f65f403da90 0x7f65f403ff50 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 --2- 192.168.123.100:0/3518790899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6618108b70 0x7f661819e990 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 >> 192.168.123.100:0/3518790899 conn(0x7f661807c040 msgr2=0x7f6618105f30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 shutdown_connections 2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.323+0000 7f661f487640 1 -- 192.168.123.100:0/3518790899 wait complete. 
2026-03-09T17:20:26.371 INFO:teuthology.orchestra.run.vm00.stdout:mgr is available 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 Processor -- start 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- start start 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a8108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f60a8109540 con 0x7f60a8108b70 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60adb00640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a8108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:26.627 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60adb00640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a8108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51500/0 (socket says 192.168.123.100:51500) 2026-03-09T17:20:26.627 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60adb00640 1 -- 192.168.123.100:0/26077053 learned_addr learned my addr 192.168.123.100:0/26077053 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60adb00640 1 -- 192.168.123.100:0/26077053 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f60a8109d70 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60adb00640 1 --2- 192.168.123.100:0/26077053 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a8108f70 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f609c009b80 tx=0x7f609c02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=66a168c643474cc0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60acafe640 1 -- 192.168.123.100:0/26077053 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f609c03c070 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60acafe640 1 -- 192.168.123.100:0/26077053 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f609c037440 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- 192.168.123.100:0/26077053 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 msgr2=0x7f60a8108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 --2- 192.168.123.100:0/26077053 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a8108f70 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f609c009b80 tx=0x7f609c02f190 comp rx=0 tx=0).stop 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- 192.168.123.100:0/26077053 shutdown_connections 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 --2- 192.168.123.100:0/26077053 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a8108f70 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- 192.168.123.100:0/26077053 >> 192.168.123.100:0/26077053 conn(0x7f60a807c040 msgr2=0x7f60a807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- 192.168.123.100:0/26077053 shutdown_connections 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.487+0000 7f60afd8b640 1 -- 192.168.123.100:0/26077053 wait complete. 
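Everything in this stretch is the ceph CLI running with messenger debugging at level 1: each invocation creates a short-lived client messenger, performs the banner/hello handshake with mon.0, subscribes to config and monmap (plus mgrmap/osdmap where needed), sends a single mon_command, and then tears the connections down again (mark_down, shutdown_connections, "wait complete."). When scanning a trace like this it can help to filter out just the commands that were issued; the helper below is purely illustrative and only assumes the mon_command line format visible above.

```python
import re
import sys

# Matches the payload of lines such as
#   ... -- mon_command({"prefix": "config assimilate-conf"} v 0) -- ...
MON_COMMAND = re.compile(r"mon_command\((\{.*?\}) v \d+\)")

def commands(lines):
    """Yield the JSON payload of every mon_command found in the trace."""
    for line in lines:
        m = MON_COMMAND.search(line)
        if m:
            yield m.group(1)

if __name__ == "__main__":
    for cmd in commands(sys.stdin):
        print(cmd)
```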
2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 Processor -- start 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 -- start start 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a819ea30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f60a810cfa0 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60adb00640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a819ea30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60adb00640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a819ea30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51510/0 (socket says 192.168.123.100:51510) 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60adb00640 1 -- 192.168.123.100:0/3221934201 learned_addr learned my addr 192.168.123.100:0/3221934201 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60adb00640 1 -- 192.168.123.100:0/3221934201 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f60a819ef70 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60adb00640 1 --2- 192.168.123.100:0/3221934201 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a819ea30 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f609c02f740 tx=0x7f609c0039b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f609c03c040 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f609c003c40 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f60a819f200 con 0x7f60a8108b70 2026-03-09T17:20:26.628 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f609c035db0 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f60a81a1ef0 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 3) ==== 50095+0+0 (secure 0 0 0) 0x7f609c037440 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6070005180 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f6096ffd640 1 --2- 192.168.123.100:0/3221934201 >> v2:192.168.123.100:6800/2312331294 conn(0x7f608003d680 0x7f608003fb40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f60ad2ff640 1 --2- 192.168.123.100:0/3221934201 >> v2:192.168.123.100:6800/2312331294 conn(0x7f608003d680 0x7f608003fb40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.491+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f609c07b760 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.495+0000 7f60ad2ff640 1 --2- 192.168.123.100:0/3221934201 >> v2:192.168.123.100:6800/2312331294 conn(0x7f608003d680 0x7f608003fb40 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f6098009a10 tx=0x7f6098006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.495+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f609c03d280 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.587+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f6070003c00 con 0x7f60a8108b70 2026-03-09T17:20:26.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.587+0000 7f6096ffd640 1 -- 192.168.123.100:0/3221934201 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) ==== 70+0+380 (secure 0 0 0) 0x7f609c04adc0 con 0x7f60a8108b70 
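The mon_command/ack pair just above is `ceph config assimilate-conf` storing the minimal [global]/[mgr]/[osd] snippet printed earlier into the monitors' central configuration database. A rough equivalent of that step from Python is sketched below; the helper name, the temp-file handling, and the sample options are illustrative only, and the assumption is that any options the monitors cannot store come back on stdout.

```python
import subprocess
import tempfile

SAMPLE_CONF = """\
[global]
osd_crush_chooseleaf_type = 0

[mgr]
mgr/telemetry/nag = false
"""

def assimilate_conf(conf_text: str) -> str:
    # Write the snippet to a file and hand it to the CLI with -i; the
    # monitors keep whatever options they recognise in their config db.
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
        f.write(conf_text)
        path = f.name
    result = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", path],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(assimilate_conf(SAMPLE_CONF))
```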
2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 >> v2:192.168.123.100:6800/2312331294 conn(0x7f608003d680 msgr2=0x7f608003fb40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 --2- 192.168.123.100:0/3221934201 >> v2:192.168.123.100:6800/2312331294 conn(0x7f608003d680 0x7f608003fb40 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f6098009a10 tx=0x7f6098006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 msgr2=0x7f60a819ea30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 --2- 192.168.123.100:0/3221934201 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a819ea30 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f609c02f740 tx=0x7f609c0039b0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 shutdown_connections 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 --2- 192.168.123.100:0/3221934201 >> v2:192.168.123.100:6800/2312331294 conn(0x7f608003d680 0x7f608003fb40 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 --2- 192.168.123.100:0/3221934201 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f60a8108b70 0x7f60a819ea30 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 >> 192.168.123.100:0/3221934201 conn(0x7f60a807c040 msgr2=0x7f60a81060d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 shutdown_connections 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.591+0000 7f60afd8b640 1 -- 192.168.123.100:0/3221934201 wait complete. 2026-03-09T17:20:26.629 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module... 2026-03-09T17:20:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:26 vm00 bash[20770]: cluster 2026-03-09T17:20:25.796305+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: y(active, since 1.00663s) 2026-03-09T17:20:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:26 vm00 bash[20770]: cluster 2026-03-09T17:20:25.796305+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: y(active, since 1.00663s) 2026-03-09T17:20:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:26 vm00 bash[20770]: audit 2026-03-09T17:20:26.324460+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 
192.168.123.100:0/3518790899' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:20:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:26 vm00 bash[20770]: audit 2026-03-09T17:20:26.324460+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.100:0/3518790899' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:20:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:26 vm00 bash[20770]: audit 2026-03-09T17:20:26.590693+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3221934201' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T17:20:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:26 vm00 bash[20770]: audit 2026-03-09T17:20:26.590693+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3221934201' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f082b3fb640 1 Processor -- start 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f082b3fb640 1 -- start start 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f082b3fb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f0824108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f082b3fb640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0824109540 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0829170640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f0824108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0829170640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f0824108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51526/0 (socket says 192.168.123.100:51526) 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0829170640 1 -- 192.168.123.100:0/319445766 learned_addr learned my addr 192.168.123.100:0/319445766 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0829170640 1 -- 192.168.123.100:0/319445766 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0824109d70 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0829170640 1 --2- 192.168.123.100:0/319445766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f0824108f70 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f0818009b80 tx=0x7f081802f190 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=85a9060ca0f679c2 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0813fff640 1 -- 192.168.123.100:0/319445766 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f081803c070 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.735+0000 7f0813fff640 1 -- 192.168.123.100:0/319445766 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f0818037440 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/319445766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 msgr2=0x7f0824108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 --2- 192.168.123.100:0/319445766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f0824108f70 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f0818009b80 tx=0x7f081802f190 comp rx=0 tx=0).stop 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/319445766 shutdown_connections 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 --2- 192.168.123.100:0/319445766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f0824108f70 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/319445766 >> 192.168.123.100:0/319445766 conn(0x7f082407c040 msgr2=0x7f082407c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/319445766 shutdown_connections 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/319445766 wait complete. 
2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 Processor -- start 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- start start 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f082419ea50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f0829170640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f082419ea50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f0829170640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f082419ea50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51528/0 (socket says 192.168.123.100:51528) 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f0829170640 1 -- 192.168.123.100:0/2452042556 learned_addr learned my addr 192.168.123.100:0/2452042556 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f082410cfa0 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f0829170640 1 -- 192.168.123.100:0/2452042556 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f082419ef90 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f0829170640 1 --2- 192.168.123.100:0/2452042556 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f082419ea50 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f081802f740 tx=0x7f08180039b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f081803c040 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f0818003c40 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0818035db0 con 0x7f0824108b70 
2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f082419f220 con 0x7f0824108b70 2026-03-09T17:20:27.871 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f082419f640 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 3) ==== 50095+0+0 (secure 0 0 0) 0x7f0818030050 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f07ec005180 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f08127fc640 1 --2- 192.168.123.100:0/2452042556 >> v2:192.168.123.100:6800/2312331294 conn(0x7f080003da40 0x7f080003ff00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.739+0000 7f082896f640 1 --2- 192.168.123.100:0/2452042556 >> v2:192.168.123.100:6800/2312331294 conn(0x7f080003da40 0x7f080003ff00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.743+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f0818076af0 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.743+0000 7f082896f640 1 --2- 192.168.123.100:0/2452042556 >> v2:192.168.123.100:6800/2312331294 conn(0x7f080003da40 0x7f080003ff00 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f0814009a10 tx=0x7f0814006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.747+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0818030420 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.799+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 4) ==== 50201+0+0 (secure 0 0 0) 0x7f0818030650 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:26.863+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) -- 0x7f07ec005470 con 0x7f0824108b70 
2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.807+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) ==== 86+0+0 (secure 0 0 0) 0x7f081804b400 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f08127fc640 1 -- 192.168.123.100:0/2452042556 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mgrmap(e 5) ==== 50212+0+0 (secure 0 0 0) 0x7f081804a770 con 0x7f0824108b70 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 >> v2:192.168.123.100:6800/2312331294 conn(0x7f080003da40 msgr2=0x7f080003ff00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 --2- 192.168.123.100:0/2452042556 >> v2:192.168.123.100:6800/2312331294 conn(0x7f080003da40 0x7f080003ff00 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f0814009a10 tx=0x7f0814006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 msgr2=0x7f082419ea50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 --2- 192.168.123.100:0/2452042556 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f082419ea50 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f081802f740 tx=0x7f08180039b0 comp rx=0 tx=0).stop 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 shutdown_connections 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 --2- 192.168.123.100:0/2452042556 >> v2:192.168.123.100:6800/2312331294 conn(0x7f080003da40 0x7f080003ff00 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 --2- 192.168.123.100:0/2452042556 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0824108b70 0x7f082419ea50 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.811+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 >> 192.168.123.100:0/2452042556 conn(0x7f082407c040 msgr2=0x7f08241060f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.815+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 shutdown_connections 2026-03-09T17:20:27.872 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:27.815+0000 7f082b3fb640 1 -- 192.168.123.100:0/2452042556 wait complete. 
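That last CLI invocation was `ceph mgr module enable cephadm`. Enabling a module bumps the mgrmap epoch and makes the active mgr respawn and re-import all of its Python modules, which is what the journalctl output below is showing. A minimal sketch of issuing the same command and confirming the result; the ceph() wrapper is invented for brevity, and `enabled_modules` is the key that `ceph mgr module ls --format json` is expected to report.

```python
import json
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

ceph("mgr", "module", "enable", "cephadm")
modules = json.loads(ceph("mgr", "module", "ls", "--format", "json"))
print("cephadm" in modules.get("enabled_modules", []))
```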
2026-03-09T17:20:28.126 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:27 vm00 bash[20770]: cluster 2026-03-09T17:20:26.802305+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:27 vm00 bash[20770]: cluster 2026-03-09T17:20:26.802305+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:27 vm00 bash[20770]: audit 2026-03-09T17:20:26.865130+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.100:0/2452042556' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:27 vm00 bash[20770]: audit 2026-03-09T17:20:26.865130+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.100:0/2452042556' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:27 vm00 bash[21037]: ignoring --setuser ceph since I am not root 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:27 vm00 bash[21037]: ignoring --setgroup ceph since I am not root 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:27 vm00 bash[21037]: debug 2026-03-09T17:20:27.951+0000 7fb8e12cb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:27 vm00 bash[21037]: debug 2026-03-09T17:20:27.995+0000 7fb8e12cb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:20:28.126 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:28 vm00 bash[21037]: debug 2026-03-09T17:20:28.123+0000 7fb8e12cb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 Processor -- start 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- start start 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e8074bd0 0x7f25e8074fd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f25e80755a0 con 0x7f25e8074bd0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7f25e8074bd0 0x7f25e8074fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e8074bd0 0x7f25e8074fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51552/0 (socket says 192.168.123.100:51552) 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 -- 192.168.123.100:0/4116775494 learned_addr learned my addr 192.168.123.100:0/4116775494 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 -- 192.168.123.100:0/4116775494 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f25e810e4c0 con 0x7f25e8074bd0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 --2- 192.168.123.100:0/4116775494 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e8074bd0 0x7f25e8074fd0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f25e4009eb0 tx=0x7f25e40311b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e5d41fe9fba9b586 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25ecb30640 1 -- 192.168.123.100:0/4116775494 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f25e403e070 con 0x7f25e8074bd0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25ecb30640 1 -- 192.168.123.100:0/4116775494 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f25e4031d70 con 0x7f25e8074bd0 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- 192.168.123.100:0/4116775494 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e8074bd0 msgr2=0x7f25e8074fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 --2- 192.168.123.100:0/4116775494 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e8074bd0 0x7f25e8074fd0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f25e4009eb0 tx=0x7f25e40311b0 comp rx=0 tx=0).stop 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- 192.168.123.100:0/4116775494 shutdown_connections 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 --2- 192.168.123.100:0/4116775494 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e8074bd0 0x7f25e8074fd0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- 192.168.123.100:0/4116775494 
>> 192.168.123.100:0/4116775494 conn(0x7f25e806fe50 msgr2=0x7f25e80722b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- 192.168.123.100:0/4116775494 shutdown_connections 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- 192.168.123.100:0/4116775494 wait complete. 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 Processor -- start 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- start start 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 0x7f25e81a2f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:28.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25eeb34640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f25e810ef90 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 0x7f25e81a2f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 0x7f25e81a2f60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51556/0 (socket says 192.168.123.100:51556) 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.027+0000 7f25edb32640 1 -- 192.168.123.100:0/266413845 learned_addr learned my addr 192.168.123.100:0/266413845 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25edb32640 1 -- 192.168.123.100:0/266413845 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f25e81a34a0 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25edb32640 1 --2- 192.168.123.100:0/266413845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 0x7f25e81a2f60 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f25e4006fd0 tx=0x7f25e40396e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f25e403e070 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25eeb34640 1 -- 
192.168.123.100:0/266413845 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f25e81a3730 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25eeb34640 1 -- 192.168.123.100:0/266413845 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f25e81a4280 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25eeb34640 1 -- 192.168.123.100:0/266413845 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f25e8074bd0 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f25e4042400 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f25e40413b0 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 50212+0+0 (secure 0 0 0) 0x7f25e4041550 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25d6ffd640 1 --2- 192.168.123.100:0/266413845 >> v2:192.168.123.100:6800/2312331294 conn(0x7f25c003dc00 0x7f25c00400c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25ed331640 1 -- 192.168.123.100:0/266413845 >> v2:192.168.123.100:6800/2312331294 conn(0x7f25c003dc00 msgr2=0x7f25c00400c0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2312331294 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25ed331640 1 --2- 192.168.123.100:0/266413845 >> v2:192.168.123.100:6800/2312331294 conn(0x7f25c003dc00 0x7f25c00400c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.031+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f25e40781d0 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.043+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f25e4041920 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.167+0000 7f25eeb34640 1 -- 192.168.123.100:0/266413845 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f25e81a49b0 con 0x7f25e81a2b40 
2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.167+0000 7f25d6ffd640 1 -- 192.168.123.100:0/266413845 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) ==== 56+0+88 (secure 0 0 0) 0x7f25e400edd0 con 0x7f25e81a2b40 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 -- 192.168.123.100:0/266413845 >> v2:192.168.123.100:6800/2312331294 conn(0x7f25c003dc00 msgr2=0x7f25c00400c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 --2- 192.168.123.100:0/266413845 >> v2:192.168.123.100:6800/2312331294 conn(0x7f25c003dc00 0x7f25c00400c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 -- 192.168.123.100:0/266413845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 msgr2=0x7f25e81a2f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 --2- 192.168.123.100:0/266413845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 0x7f25e81a2f60 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f25e4006fd0 tx=0x7f25e40396e0 comp rx=0 tx=0).stop 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 -- 192.168.123.100:0/266413845 shutdown_connections 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 --2- 192.168.123.100:0/266413845 >> v2:192.168.123.100:6800/2312331294 conn(0x7f25c003dc00 0x7f25c00400c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 --2- 192.168.123.100:0/266413845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25e81a2b40 0x7f25e81a2f60 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 -- 192.168.123.100:0/266413845 >> 192.168.123.100:0/266413845 conn(0x7f25e806fe50 msgr2=0x7f25e8070740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 -- 192.168.123.100:0/266413845 shutdown_connections 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.171+0000 7f25d4ff9640 1 -- 192.168.123.100:0/266413845 wait complete. 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-09T17:20:28.219 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 5... 
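"Waiting for the mgr to restart... / Waiting for mgr epoch 5..." is the bootstrap polling the mgrmap until the respawned daemon reappears under a new-enough epoch; the `mgr stat` JSON it keys off has the same shape as the block printed a little earlier ({"epoch": 5, "available": true, ...}). A hedged sketch of such a wait loop, with the function names and timeout chosen purely for illustration:

```python
import json
import subprocess
import time

def mgr_epoch() -> int:
    out = subprocess.run(["ceph", "mgr", "stat", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)["epoch"]

def wait_for_mgr_epoch(min_epoch: int, timeout: float = 60.0) -> None:
    # Poll until the mgrmap epoch reaches min_epoch, i.e. until the mgr
    # has restarted with the newly enabled module loaded.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if mgr_epoch() >= min_epoch:
            return
        time.sleep(1)
    raise TimeoutError(f"mgr epoch did not reach {min_epoch} within {timeout}s")
```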
2026-03-09T17:20:28.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:28 vm00 bash[21037]: debug 2026-03-09T17:20:28.451+0000 7fb8e12cb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:28 vm00 bash[21037]: debug 2026-03-09T17:20:28.931+0000 7fb8e12cb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.015+0000 7fb8e12cb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:28 vm00 bash[20770]: audit 2026-03-09T17:20:27.805248+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.100:0/2452042556' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:28 vm00 bash[20770]: audit 2026-03-09T17:20:27.805248+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.100:0/2452042556' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:28 vm00 bash[20770]: cluster 2026-03-09T17:20:27.808872+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:28 vm00 bash[20770]: cluster 2026-03-09T17:20:27.808872+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:28 vm00 bash[20770]: audit 2026-03-09T17:20:28.170625+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.100:0/266413845' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:20:29.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:28 vm00 bash[20770]: audit 2026-03-09T17:20:28.170625+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.100:0/266413845' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
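The burst of "Module <name> has missing NOTIFY_TYPES member" lines here (and continuing below) comes from the respawned mgr importing every bundled module after cephadm was enabled; the message only means that a module does not declare which cluster notifications it consumes. For orientation, a module declares this with a NOTIFY_TYPES class attribute, roughly as sketched below; this is only importable inside the ceph-mgr Python runtime, and the exact enum members shown are an assumption.

```python
# Illustrative sketch, not a drop-in mgr module.
from mgr_module import MgrModule, NotifyType

class Example(MgrModule):
    # Declaring NOTIFY_TYPES tells the mgr which notifications to deliver
    # and silences the "has missing NOTIFY_TYPES member" warning.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type, notify_id):
        # Only the types listed above are expected to arrive here.
        pass
```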
2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: from numpy import show_config as show_numpy_config 2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.135+0000 7fb8e12cb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.275+0000 7fb8e12cb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.315+0000 7fb8e12cb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:20:29.403 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.355+0000 7fb8e12cb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:20:29.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.399+0000 7fb8e12cb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:20:29.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.447+0000 7fb8e12cb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:20:30.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.879+0000 7fb8e12cb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:20:30.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.911+0000 7fb8e12cb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:20:30.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:29 vm00 bash[21037]: debug 2026-03-09T17:20:29.947+0000 7fb8e12cb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T17:20:30.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.087+0000 7fb8e12cb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:20:30.426 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.127+0000 7fb8e12cb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:20:30.426 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.167+0000 7fb8e12cb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:20:30.426 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.271+0000 7fb8e12cb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:20:30.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.423+0000 7fb8e12cb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:20:30.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.587+0000 7fb8e12cb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:20:30.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.623+0000 7fb8e12cb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:20:30.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 
2026-03-09T17:20:30.663+0000 7fb8e12cb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:20:31.092 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:30 vm00 bash[21037]: debug 2026-03-09T17:20:30.811+0000 7fb8e12cb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:20:31.092 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:31 vm00 bash[21037]: debug 2026-03-09T17:20:31.035+0000 7fb8e12cb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.039609+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.039609+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.039818+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon y 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.039818+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon y 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.043743+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.043743+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.044013+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0042777s) 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.044013+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0042777s) 2026-03-09T17:20:31.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.046341+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.046341+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.047156+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.047156+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.048074+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 
cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.048074+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.048271+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.048271+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.048423+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.048423+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.054755+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon y is now available 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: cluster 2026-03-09T17:20:31.054755+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon y is now available 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.063887+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.063887+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.067994+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.067994+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.081797+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.081797+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.082213+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.082213+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.083895+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.083895+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.090550+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:20:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:31 vm00 bash[20770]: audit 2026-03-09T17:20:31.090550+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 Processor -- start 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- start start 2026-03-09T17:20:32.094 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 0x7fb04c074910 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fb04c074ee0 con 0x7fb04c074510 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 0x7fb04c074910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 0x7fb04c074910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 
tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51564/0 (socket says 192.168.123.100:51564) 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 -- 192.168.123.100:0/2182048535 learned_addr learned my addr 192.168.123.100:0/2182048535 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 -- 192.168.123.100:0/2182048535 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb04c075060 con 0x7fb04c074510 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 --2- 192.168.123.100:0/2182048535 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 0x7fb04c074910 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fb048009de0 tx=0x7fb048031240 comp rx=0 tx=0).ready entity=mon.0 client_cookie=28c7ea3316a278df server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05145c640 1 -- 192.168.123.100:0/2182048535 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb04803e070 con 0x7fb04c074510 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05145c640 1 -- 192.168.123.100:0/2182048535 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fb048031dc0 con 0x7fb04c074510 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/2182048535 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 msgr2=0x7fb04c074910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 --2- 192.168.123.100:0/2182048535 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 0x7fb04c074910 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fb048009de0 tx=0x7fb048031240 comp rx=0 tx=0).stop 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/2182048535 shutdown_connections 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 --2- 192.168.123.100:0/2182048535 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c074510 0x7fb04c074910 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/2182048535 >> 192.168.123.100:0/2182048535 conn(0x7fb04c06fe30 msgr2=0x7fb04c0722b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/2182048535 shutdown_connections 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/2182048535 wait 
complete. 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 Processor -- start 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- start start 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 0x7fb04c0872b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fb04c079cd0 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 0x7fb04c0872b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 0x7fb04c0872b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51580/0 (socket says 192.168.123.100:51580) 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 -- 192.168.123.100:0/95859338 learned_addr learned my addr 192.168.123.100:0/95859338 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 -- 192.168.123.100:0/95859338 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb04c0877f0 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb05245e640 1 --2- 192.168.123.100:0/95859338 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 0x7fb04c0872b0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fb0480317f0 tx=0x7fb048003990 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb048046070 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/95859338 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb04c087a80 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.355+0000 7fb053460640 1 -- 192.168.123.100:0/95859338 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb04c1bbce0 con 0x7fb04c086e90 2026-03-09T17:20:32.095 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fb048039890 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb04803e070 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 50212+0+0 (secure 0 0 0) 0x7fb04800f030 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb0437fe640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb051c5d640 1 -- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 msgr2=0x7fb0380400f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2312331294 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb051c5d640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fb048076ef0 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.359+0000 7fb053460640 1 -- 192.168.123.100:0/95859338 --> v2:192.168.123.100:6800/2312331294 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fb04c074510 con 0x7fb03803dc30 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.559+0000 7fb051c5d640 1 -- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 msgr2=0x7fb0380400f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2312331294 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.559+0000 7fb051c5d640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:28.959+0000 7fb051c5d640 1 -- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 msgr2=0x7fb0380400f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2312331294 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:28.959+0000 7fb051c5d640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:29.759+0000 7fb051c5d640 1 -- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 msgr2=0x7fb0380400f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2312331294 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:29.759+0000 7fb051c5d640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:31.039+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mgrmap(e 6) ==== 50014+0+0 (secure 0 0 0) 0x7fb0480759b0 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:31.039+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 msgr2=0x7fb0380400f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:31.039+0000 7fb0437fe640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.043+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7fb048076430 con 0x7fb04c086e90 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.043+0000 7fb0437fe640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/1655006894 conn(0x7fb0380411d0 0x7fb0380435c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.043+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 --> v2:192.168.123.100:6800/1655006894 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fb048063830 con 0x7fb0380411d0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.043+0000 7fb051c5d640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/1655006894 conn(0x7fb0380411d0 0x7fb0380435c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.043+0000 7fb051c5d640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/1655006894 conn(0x7fb0380411d0 0x7fb0380435c0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fb044003e00 tx=0x7fb044007350 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.095 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.043+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7fb048063830 con 0x7fb0380411d0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.047+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 --> v2:192.168.123.100:6800/1655006894 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7fb010000d10 con 0x7fb0380411d0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.051+0000 7fb0437fe640 1 -- 192.168.123.100:0/95859338 <== mgr.14118 v2:192.168.123.100:6800/1655006894 2 ==== command_reply(tid 1: 0 ) ==== 8+0+51 (secure 0 0 0) 0x7fb010000d10 con 0x7fb0380411d0 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/1655006894 conn(0x7fb0380411d0 msgr2=0x7fb0380435c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/1655006894 conn(0x7fb0380411d0 0x7fb0380435c0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fb044003e00 tx=0x7fb044007350 comp rx=0 tx=0).stop 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 msgr2=0x7fb04c0872b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.095 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 --2- 192.168.123.100:0/95859338 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 0x7fb04c0872b0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fb0480317f0 tx=0x7fb048003990 comp rx=0 tx=0).stop 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 shutdown_connections 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/1655006894 conn(0x7fb0380411d0 0x7fb0380435c0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 --2- 192.168.123.100:0/95859338 >> v2:192.168.123.100:6800/2312331294 conn(0x7fb03803dc30 0x7fb0380400f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 --2- 192.168.123.100:0/95859338 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb04c086e90 0x7fb04c0872b0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 >> 192.168.123.100:0/95859338 
conn(0x7fb04c06fe30 msgr2=0x7fb04c070930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 shutdown_connections 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.055+0000 7fb0417fa640 1 -- 192.168.123.100:0/95859338 wait complete. 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 5 is available 2026-03-09T17:20:32.096 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm... 2026-03-09T17:20:32.393 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe3855a640 1 Processor -- start 2026-03-09T17:20:32.393 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe3855a640 1 -- start start 2026-03-09T17:20:32.393 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe3855a640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3007abd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe3855a640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbe3007b110 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe362cf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3007abd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe362cf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3007abd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36242/0 (socket says 192.168.123.100:36242) 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe362cf640 1 -- 192.168.123.100:0/755043670 learned_addr learned my addr 192.168.123.100:0/755043670 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe362cf640 1 -- 192.168.123.100:0/755043670 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbe3007b290 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.223+0000 7fbe362cf640 1 --2- 192.168.123.100:0/755043670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3007abd0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fbe18009920 tx=0x7fbe1802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6cdb5865b5a535be server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe352cd640 1 -- 192.168.123.100:0/755043670 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 
0) 0x7fbe1803c070 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe352cd640 1 -- 192.168.123.100:0/755043670 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fbe18037440 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/755043670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 msgr2=0x7fbe3007abd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 --2- 192.168.123.100:0/755043670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3007abd0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fbe18009920 tx=0x7fbe1802ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/755043670 shutdown_connections 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 --2- 192.168.123.100:0/755043670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3007abd0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/755043670 >> 192.168.123.100:0/755043670 conn(0x7fbe30102090 msgr2=0x7fbe301044b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/755043670 shutdown_connections 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/755043670 wait complete. 
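The /usr/bin/ceph stderr records above trace one msgr2 session of the CLI against the monitor: connect to v2:...:3300, learn the client address, mon_subscribe to config and monmap (later mgrmap and osdmap), run the command, then mark_down and shutdown_connections. A hedged sketch of the same round trip through the librados Python binding; the conffile path and the chosen command are illustrative, not taken from this run:

# Hedged sketch: the mon round trip recorded in the stderr lines above,
# driven through the librados Python binding. Paths and the command are
# illustrative only.
import json

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()          # mon handshake + config/monmap subscriptions
try:
    cmd = json.dumps({'prefix': 'mgr stat', 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    print(ret, out.decode() if out else errs)
finally:
    cluster.shutdown()     # the mark_down / shutdown_connections teardown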
2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 Processor -- start 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- start start 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3019ea40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbe3010cdb0 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe362cf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3019ea40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe362cf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3019ea40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36244/0 (socket says 192.168.123.100:36244) 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe362cf640 1 -- 192.168.123.100:0/1543613732 learned_addr learned my addr 192.168.123.100:0/1543613732 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe362cf640 1 -- 192.168.123.100:0/1543613732 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbe3019ef80 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe362cf640 1 --2- 192.168.123.100:0/1543613732 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3019ea40 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fbe18035ed0 tx=0x7fbe18035f00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbe18047070 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fbe18037960 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbe1804c8c0 con 0x7fbe3007c770 2026-03-09T17:20:32.394 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbe3019f210 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbe30106300 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7fbe18054070 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe277fe640 1 --2- 192.168.123.100:0/1543613732 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbe0803db30 0x7fbe0803fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.227+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fbe1807f810 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.231+0000 7fbe35ace640 1 --2- 192.168.123.100:0/1543613732 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbe0803db30 0x7fbe0803fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.231+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbdf8005180 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.231+0000 7fbe35ace640 1 --2- 192.168.123.100:0/1543613732 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbe0803db30 0x7fbe0803fff0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fbe200099c0 tx=0x7fbe20006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.231+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbe1804f120 con 0x7fbe3007c770 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.339+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 --> v2:192.168.123.100:6800/1655006894 -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) -- 0x7fbdf8002bf0 con 0x7fbe0803db30 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.347+0000 7fbe277fe640 1 -- 192.168.123.100:0/1543613732 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7fbdf8002bf0 con 0x7fbe0803db30 
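The mgr_command above carries the step announced earlier as "Setting orchestrator backend to cephadm...": the CLI asks the active mgr to run "orch set backend" with module_name "cephadm" and receives an empty success reply. A hedged sketch of the equivalent CLI calls driven from Python; the final status check is illustrative (the cephadm module itself was already enabled earlier, as the mon audit log shows):

# Hedged sketch: the orchestrator-backend step as plain ceph CLI calls.
# The trailing status check is illustrative, not part of this run.
import subprocess


def ceph(*args: str) -> str:
    return subprocess.run(['ceph', *args], check=True,
                          capture_output=True, text=True).stdout


ceph('mgr', 'module', 'enable', 'cephadm')   # audited earlier in the mon log
ceph('orch', 'set', 'backend', 'cephadm')    # the mgr_command shown above
print(ceph('orch', 'status', '--format', 'json'))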
2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbe0803db30 msgr2=0x7fbe0803fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 --2- 192.168.123.100:0/1543613732 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbe0803db30 0x7fbe0803fff0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fbe200099c0 tx=0x7fbe20006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 msgr2=0x7fbe3019ea40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 --2- 192.168.123.100:0/1543613732 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3019ea40 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fbe18035ed0 tx=0x7fbe18035f00 comp rx=0 tx=0).stop 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 shutdown_connections 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 --2- 192.168.123.100:0/1543613732 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbe0803db30 0x7fbe0803fff0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 --2- 192.168.123.100:0/1543613732 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbe3007c770 0x7fbe3019ea40 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 >> 192.168.123.100:0/1543613732 conn(0x7fbe30102090 msgr2=0x7fbe30102dc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 shutdown_connections 2026-03-09T17:20:32.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.355+0000 7fbe3855a640 1 -- 192.168.123.100:0/1543613732 wait complete. 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: cephadm 2026-03-09T17:20:31.061291+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: cephadm 2026-03-09T17:20:31.061291+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
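With the backend set, the run switches cephadm to root mode (this job runs with mode/root) and provisions the cluster SSH identity; the "cephadm set-user" mgr_command and the "Generating ssh key..." line appear in the records that follow. A hedged sketch of the same sequence as CLI calls, reusing the ceph() helper from the previous sketch; distributing the public key to the remote hosts is only indicated, not shown:

# Hedged sketch: cephadm root mode plus SSH identity setup, matching the
# "cephadm set-user" command and "Generating ssh key..." step recorded below.
ceph('cephadm', 'set-user', 'root')      # root mode for this job
ceph('cephadm', 'generate-key')          # "Generating ssh key..."
pub = ceph('cephadm', 'get-pub-key')     # public half, to install on the hosts
print(pub)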
2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:31.409608+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:31.409608+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:31.412087+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:31.412087+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: cluster 2026-03-09T17:20:32.046374+0000 mon.a (mon.0) 57 : cluster [DBG] mgrmap e7: y(active, since 1.00664s) 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: cluster 2026-03-09T17:20:32.046374+0000 mon.a (mon.0) 57 : cluster [DBG] mgrmap e7: y(active, since 1.00664s) 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:32.286418+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:32.286418+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:32.347061+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:32.347061+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:32.354240+0000 mon.a (mon.0) 60 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:32.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:32 vm00 bash[20770]: audit 2026-03-09T17:20:32.354240+0000 mon.a (mon.0) 60 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea8c28640 1 Processor -- start 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea8c28640 1 -- start start 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea8c28640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea4108fa0 
unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea8c28640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6ea4109570 con 0x7f6ea4108ba0 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea2575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea4108fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea2575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea4108fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36250/0 (socket says 192.168.123.100:36250) 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea2575640 1 -- 192.168.123.100:0/1817608827 learned_addr learned my addr 192.168.123.100:0/1817608827 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea2575640 1 -- 192.168.123.100:0/1817608827 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6ea4109da0 con 0x7f6ea4108ba0 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea2575640 1 --2- 192.168.123.100:0/1817608827 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea4108fa0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f6e8c009920 tx=0x7f6e8c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=307690e42e6b42ea server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea1573640 1 -- 192.168.123.100:0/1817608827 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6e8c03c070 con 0x7f6ea4108ba0 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea1573640 1 -- 192.168.123.100:0/1817608827 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6e8c037440 con 0x7f6ea4108ba0 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea8c28640 1 -- 192.168.123.100:0/1817608827 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 msgr2=0x7f6ea4108fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.523+0000 7f6ea8c28640 1 --2- 192.168.123.100:0/1817608827 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea4108fa0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f6e8c009920 tx=0x7f6e8c02ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 
192.168.123.100:0/1817608827 shutdown_connections 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 --2- 192.168.123.100:0/1817608827 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea4108fa0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 192.168.123.100:0/1817608827 >> 192.168.123.100:0/1817608827 conn(0x7f6ea407c090 msgr2=0x7f6ea407c4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 192.168.123.100:0/1817608827 shutdown_connections 2026-03-09T17:20:32.673 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 192.168.123.100:0/1817608827 wait complete. 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 Processor -- start 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- start start 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea419ea60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6ea410cfd0 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea2575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea419ea60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea2575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea419ea60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36264/0 (socket says 192.168.123.100:36264) 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea2575640 1 -- 192.168.123.100:0/277220907 learned_addr learned my addr 192.168.123.100:0/277220907 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea2575640 1 -- 192.168.123.100:0/277220907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6ea419efa0 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea2575640 1 --2- 192.168.123.100:0/277220907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea419ea60 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 
crypto rx=0x7f6e8c0376c0 tx=0x7f6e8c0376f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6e8c045070 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6ea419f230 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6ea41a1f20 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6e8c037db0 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6e8c03c070 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7f6e8c04a430 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6e937fe640 1 --2- 192.168.123.100:0/277220907 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6e7403db30 0x7f6e7403fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f6e8c077180 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea1d74640 1 --2- 192.168.123.100:0/277220907 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6e7403db30 0x7f6e7403fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.527+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6e68005180 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.531+0000 7f6ea1d74640 1 --2- 192.168.123.100:0/277220907 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6e7403db30 0x7f6e7403fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f6e980099c0 tx=0x7f6e98006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.531+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6e8c033200 con 0x7f6ea4108ba0 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.627+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 --> v2:192.168.123.100:6800/1655006894 -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) -- 0x7f6e68002bf0 con 0x7f6e7403db30 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.627+0000 7f6e937fe640 1 -- 192.168.123.100:0/277220907 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+16 (secure 0 0 0) 0x7f6e68002bf0 con 0x7f6e7403db30 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6e7403db30 msgr2=0x7f6e7403fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 --2- 192.168.123.100:0/277220907 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6e7403db30 0x7f6e7403fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f6e980099c0 tx=0x7f6e98006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 msgr2=0x7f6ea419ea60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 --2- 192.168.123.100:0/277220907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea419ea60 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f6e8c0376c0 tx=0x7f6e8c0376f0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 shutdown_connections 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 --2- 192.168.123.100:0/277220907 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6e7403db30 0x7f6e7403fff0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 --2- 192.168.123.100:0/277220907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ea4108ba0 0x7f6ea419ea60 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 >> 192.168.123.100:0/277220907 conn(0x7f6ea407c090 msgr2=0x7f6ea4106100 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 
7f6ea8c28640 1 -- 192.168.123.100:0/277220907 shutdown_connections 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.631+0000 7f6ea8c28640 1 -- 192.168.123.100:0/277220907 wait complete. 2026-03-09T17:20:32.674 INFO:teuthology.orchestra.run.vm00.stdout:Generating ssh key... 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d33d59640 1 Processor -- start 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d33d59640 1 -- start start 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d33d59640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d33d59640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2d2c109540 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d31ace640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d31ace640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36278/0 (socket says 192.168.123.100:36278) 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d31ace640 1 -- 192.168.123.100:0/3799726004 learned_addr learned my addr 192.168.123.100:0/3799726004 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.783+0000 7f2d31ace640 1 -- 192.168.123.100:0/3799726004 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2d2c109d70 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d31ace640 1 --2- 192.168.123.100:0/3799726004 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c108f70 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f2d20009920 tx=0x7f2d2002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=af27b17ff1508fdb server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d30acc640 1 -- 192.168.123.100:0/3799726004 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2d2003c070 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d30acc640 1 -- 192.168.123.100:0/3799726004 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f2d20037440 
con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/3799726004 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 msgr2=0x7f2d2c108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 --2- 192.168.123.100:0/3799726004 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c108f70 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f2d20009920 tx=0x7f2d2002ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/3799726004 shutdown_connections 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 --2- 192.168.123.100:0/3799726004 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c108f70 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/3799726004 >> 192.168.123.100:0/3799726004 conn(0x7f2d2c07c040 msgr2=0x7f2d2c07c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/3799726004 shutdown_connections 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/3799726004 wait complete. 
2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 Processor -- start 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- start start 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c19ea30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2d2c10cfa0 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d31ace640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c19ea30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d31ace640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c19ea30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36288/0 (socket says 192.168.123.100:36288) 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d31ace640 1 -- 192.168.123.100:0/2515103391 learned_addr learned my addr 192.168.123.100:0/2515103391 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d31ace640 1 -- 192.168.123.100:0/2515103391 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2d2c19ef70 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d31ace640 1 --2- 192.168.123.100:0/2515103391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c19ea30 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f2d2002ffd0 tx=0x7f2d20035e30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2d20045070 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2d2c19f200 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2d2c1a1ef0 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f2d20037d10 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.787+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2d2003c070 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.791+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7f2d2004a430 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.791+0000 7f2d1affd640 1 --2- 192.168.123.100:0/2515103391 >> v2:192.168.123.100:6800/1655006894 conn(0x7f2d0003db30 0x7f2d0003fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.791+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f2d200770e0 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.791+0000 7f2d312cd640 1 --2- 192.168.123.100:0/2515103391 >> v2:192.168.123.100:6800/1655006894 conn(0x7f2d0003db30 0x7f2d0003fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.791+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2cf4005180 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.795+0000 7f2d312cd640 1 --2- 192.168.123.100:0/2515103391 >> v2:192.168.123.100:6800/1655006894 conn(0x7f2d0003db30 0x7f2d0003fff0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f2d1c009a10 tx=0x7f2d1c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.795+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2d20033200 con 0x7f2d2c108b70 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.891+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 --> v2:192.168.123.100:6800/1655006894 -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) -- 0x7f2cf4002bf0 con 0x7f2d0003db30 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d1affd640 1 -- 192.168.123.100:0/2515103391 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7f2cf4002bf0 con 0x7f2d0003db30 2026-03-09T17:20:32.953 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 >> v2:192.168.123.100:6800/1655006894 conn(0x7f2d0003db30 msgr2=0x7f2d0003fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 --2- 192.168.123.100:0/2515103391 >> v2:192.168.123.100:6800/1655006894 conn(0x7f2d0003db30 0x7f2d0003fff0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f2d1c009a10 tx=0x7f2d1c006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 msgr2=0x7f2d2c19ea30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 --2- 192.168.123.100:0/2515103391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c19ea30 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f2d2002ffd0 tx=0x7f2d20035e30 comp rx=0 tx=0).stop 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 shutdown_connections 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 --2- 192.168.123.100:0/2515103391 >> v2:192.168.123.100:6800/1655006894 conn(0x7f2d0003db30 0x7f2d0003fff0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 --2- 192.168.123.100:0/2515103391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d2c108b70 0x7f2d2c19ea30 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 >> 192.168.123.100:0/2515103391 conn(0x7f2d2c07c040 msgr2=0x7f2d2c1060d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:32.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 shutdown_connections 2026-03-09T17:20:32.954 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:32.915+0000 7f2d33d59640 1 -- 192.168.123.100:0/2515103391 wait complete. 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: Generating public/private ed25519 key pair. 
2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: Your identification has been saved in /tmp/tmp7rciov5t/key 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: Your public key has been saved in /tmp/tmp7rciov5t/key.pub 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: The key fingerprint is: 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: SHA256:dnNcZNfDoRE7gtnurS7OoK92dPR9BSSzSTVOoV6PbXk ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: The key's randomart image is: 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: +--[ED25519 256]--+ 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | +=@o+| 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | +. @==.| 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | o o+++..| 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | ..o.o.+o| 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | S +.= ..E| 2026-03-09T17:20:33.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | o o.+.. o.| 2026-03-09T17:20:33.196 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | ... . .. | 2026-03-09T17:20:33.196 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | ...o. . | 2026-03-09T17:20:33.196 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: | .o+..ooo | 2026-03-09T17:20:33.196 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:32 vm00 bash[21037]: +----[SHA256]-----+ 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMS8vm6y0TabM0+MhpSTHYvKhUzzSz2GjjcD2VmCO+5 ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c43409640 1 Processor -- start 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c43409640 1 -- start start 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c43409640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c108f50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c43409640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6c3c109520 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c4117e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c108f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c4117e640 1 --2- >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c108f50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36296/0 (socket says 192.168.123.100:36296) 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c4117e640 1 -- 192.168.123.100:0/3098250026 learned_addr learned my addr 192.168.123.100:0/3098250026 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c4117e640 1 -- 192.168.123.100:0/3098250026 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6c3c109d50 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c4117e640 1 --2- 192.168.123.100:0/3098250026 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c108f50 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f6c2c009920 tx=0x7f6c2c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=592c176fd6be7871 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c33fff640 1 -- 192.168.123.100:0/3098250026 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c2c03c070 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c33fff640 1 -- 192.168.123.100:0/3098250026 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6c2c037440 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c33fff640 1 -- 192.168.123.100:0/3098250026 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c2c0354d0 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c43409640 1 -- 192.168.123.100:0/3098250026 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 msgr2=0x7f6c3c108f50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.075+0000 7f6c43409640 1 --2- 192.168.123.100:0/3098250026 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c108f50 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f6c2c009920 tx=0x7f6c2c02ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- 192.168.123.100:0/3098250026 shutdown_connections 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 --2- 192.168.123.100:0/3098250026 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c108f50 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- 192.168.123.100:0/3098250026 >> 
192.168.123.100:0/3098250026 conn(0x7f6c3c07c030 msgr2=0x7f6c3c07c440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- 192.168.123.100:0/3098250026 shutdown_connections 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- 192.168.123.100:0/3098250026 wait complete. 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 Processor -- start 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- start start 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c19e9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6c3c10cf80 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c4117e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c19e9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c4117e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c19e9b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36308/0 (socket says 192.168.123.100:36308) 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c4117e640 1 -- 192.168.123.100:0/100534285 learned_addr learned my addr 192.168.123.100:0/100534285 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c4117e640 1 -- 192.168.123.100:0/100534285 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6c3c19eef0 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c4117e640 1 --2- 192.168.123.100:0/100534285 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c19e9b0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6c2c006fd0 tx=0x7f6c2c035dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c327fc640 1 -- 192.168.123.100:0/100534285 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c2c045070 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c327fc640 1 -- 
192.168.123.100:0/100534285 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6c2c02fbc0 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c327fc640 1 -- 192.168.123.100:0/100534285 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c2c03c070 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6c3c19f180 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.079+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6c3c1a1e70 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.083+0000 7f6c327fc640 1 -- 192.168.123.100:0/100534285 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7f6c2c040450 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.083+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6c3c108fd0 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.083+0000 7f6c327fc640 1 --2- 192.168.123.100:0/100534285 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6c1403db30 0x7f6c1403fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.083+0000 7f6c327fc640 1 -- 192.168.123.100:0/100534285 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f6c2c0767a0 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.087+0000 7f6c4097d640 1 --2- 192.168.123.100:0/100534285 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6c1403db30 0x7f6c1403fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.087+0000 7f6c4097d640 1 --2- 192.168.123.100:0/100534285 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6c1403db30 0x7f6c1403fff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f6c240099c0 tx=0x7f6c24006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.087+0000 7f6c327fc640 1 -- 192.168.123.100:0/100534285 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6c2c03b170 con 0x7f6c3c108b50 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.183+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 --> v2:192.168.123.100:6800/1655006894 -- 
mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) -- 0x7f6c3c106790 con 0x7f6c1403db30 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.183+0000 7f6c327fc640 1 -- 192.168.123.100:0/100534285 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+123 (secure 0 0 0) 0x7f6c3c106790 con 0x7f6c1403db30 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.183+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6c1403db30 msgr2=0x7f6c1403fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.183+0000 7f6c43409640 1 --2- 192.168.123.100:0/100534285 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6c1403db30 0x7f6c1403fff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f6c240099c0 tx=0x7f6c24006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.183+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 msgr2=0x7f6c3c19e9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 --2- 192.168.123.100:0/100534285 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c19e9b0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6c2c006fd0 tx=0x7f6c2c035dc0 comp rx=0 tx=0).stop 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 shutdown_connections 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 --2- 192.168.123.100:0/100534285 >> v2:192.168.123.100:6800/1655006894 conn(0x7f6c1403db30 0x7f6c1403fff0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 --2- 192.168.123.100:0/100534285 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6c3c108b50 0x7f6c3c19e9b0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 >> 192.168.123.100:0/100534285 conn(0x7f6c3c07c030 msgr2=0x7f6c3c106080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:33.225 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 shutdown_connections 2026-03-09T17:20:33.226 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.187+0000 7f6c43409640 1 -- 192.168.123.100:0/100534285 wait complete. 2026-03-09T17:20:33.226 INFO:teuthology.orchestra.run.vm00.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T17:20:33.226 INFO:teuthology.orchestra.run.vm00.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T17:20:33.226 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 
2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.047319+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.047319+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.052201+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.052201+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.073717+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Bus STARTING 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.073717+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Bus STARTING 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.175439+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.175439+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.285891+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.285891+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.285977+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Bus STARTED 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.285977+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Bus STARTED 2026-03-09T17:20:33.462 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.286538+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Client ('192.168.123.100', 42210) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.286538+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:17:20:32] ENGINE Client ('192.168.123.100', 42210) lost — peer dropped the TLS connection suddenly, during handshake: (6, 
'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.343210+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.343210+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.630955+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.630955+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.911889+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.911889+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.915083+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:33.463 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:33 vm00 bash[20770]: audit 2026-03-09T17:20:32.915083+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: audit 2026-03-09T17:20:32.895172+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: audit 2026-03-09T17:20:32.895172+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.895381+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: cephadm 2026-03-09T17:20:32.895381+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 
2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: audit 2026-03-09T17:20:33.185519+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: audit 2026-03-09T17:20:33.185519+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: audit 2026-03-09T17:20:33.456437+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: audit 2026-03-09T17:20:33.456437+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: cluster 2026-03-09T17:20:33.920857+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T17:20:34.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:34 vm00 bash[20770]: cluster 2026-03-09T17:20:33.920857+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T17:20:35.352 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100' 2026-03-09T17:20:35.352 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.347+0000 7fbf8eea4640 1 Processor -- start 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.347+0000 7fbf8eea4640 1 -- start start 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.347+0000 7fbf8eea4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf88108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.347+0000 7fbf8eea4640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbf88109540 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.347+0000 7fbf8cc19640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf88108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.347+0000 7fbf8cc19640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf88108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36318/0 (socket says 192.168.123.100:36318) 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:33.347+0000 7fbf8cc19640 1 -- 192.168.123.100:0/586815391 learned_addr learned my addr 192.168.123.100:0/586815391 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 -- 192.168.123.100:0/586815391 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbf88109d70 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 --2- 192.168.123.100:0/586815391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf88108f70 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fbf70009920 tx=0x7fbf7002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=afea89835e2ccc26 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7f7fe640 1 -- 192.168.123.100:0/586815391 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf7003c070 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7f7fe640 1 -- 192.168.123.100:0/586815391 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fbf70037440 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7f7fe640 1 -- 192.168.123.100:0/586815391 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf700354d0 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- 192.168.123.100:0/586815391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 msgr2=0x7fbf88108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 --2- 192.168.123.100:0/586815391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf88108f70 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fbf70009920 tx=0x7fbf7002ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- 192.168.123.100:0/586815391 shutdown_connections 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 --2- 192.168.123.100:0/586815391 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf88108f70 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- 192.168.123.100:0/586815391 >> 192.168.123.100:0/586815391 conn(0x7fbf8807c040 msgr2=0x7fbf8807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- 192.168.123.100:0/586815391 shutdown_connections 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 
7fbf8eea4640 1 -- 192.168.123.100:0/586815391 wait complete. 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 Processor -- start 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- start start 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf8819e960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbf8810cfa0 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf8819e960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf8819e960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36334/0 (socket says 192.168.123.100:36334) 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 -- 192.168.123.100:0/1686014114 learned_addr learned my addr 192.168.123.100:0/1686014114 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 -- 192.168.123.100:0/1686014114 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbf8819eea0 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8cc19640 1 --2- 192.168.123.100:0/1686014114 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf8819e960 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fbf70006fd0 tx=0x7fbf70035dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf70045070 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbf8819f130 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 
0x7fbf881a1e20 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fbf7002fbc0 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf7003c070 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.351+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7fbf700493b0 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.355+0000 7fbf7dffb640 1 --2- 192.168.123.100:0/1686014114 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbf5c03db30 0x7fbf5c03fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.355+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fbf70077420 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.355+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbf50005180 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.355+0000 7fbf7ffff640 1 --2- 192.168.123.100:0/1686014114 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbf5c03db30 0x7fbf5c03fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.355+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbf70047d40 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.355+0000 7fbf7ffff640 1 --2- 192.168.123.100:0/1686014114 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbf5c03db30 0x7fbf5c03fff0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fbf78009a10 tx=0x7fbf78006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.451+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 --> v2:192.168.123.100:6800/1655006894 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}) -- 0x7fbf50002bf0 con 0x7fbf5c03db30 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:33.915+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 8) ==== 50212+0+0 
(secure 0 0 0) 0x7fbf700496d0 con 0x7fbf88108b70 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.287+0000 7fbf7dffb640 1 -- 192.168.123.100:0/1686014114 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7fbf50002bf0 con 0x7fbf5c03db30 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbf5c03db30 msgr2=0x7fbf5c03fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 --2- 192.168.123.100:0/1686014114 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbf5c03db30 0x7fbf5c03fff0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fbf78009a10 tx=0x7fbf78006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 msgr2=0x7fbf8819e960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 --2- 192.168.123.100:0/1686014114 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf8819e960 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fbf70006fd0 tx=0x7fbf70035dc0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 shutdown_connections 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 --2- 192.168.123.100:0/1686014114 >> v2:192.168.123.100:6800/1655006894 conn(0x7fbf5c03db30 0x7fbf5c03fff0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 --2- 192.168.123.100:0/1686014114 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf88108b70 0x7fbf8819e960 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 >> 192.168.123.100:0/1686014114 conn(0x7fbf8807c040 msgr2=0x7fbf88105f10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 shutdown_connections 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.291+0000 7fbf8eea4640 1 -- 192.168.123.100:0/1686014114 wait complete. 2026-03-09T17:20:35.353 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mon service... 
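The mgr_command payload in the entries above shows the cephadm task registering the bootstrap host with the orchestrator, using the hostname (vm00) and address (192.168.123.100) reported in the log. Every step in this section follows the same short-lived client lifecycle visible in the stderr trace: the ceph CLI starts a messenger, connects to the mon, subscribes to the mon/config/mgr/osd maps, fetches command descriptions, forwards the command to the active mgr, then marks its connections down. A rough shell equivalent of this particular step, assuming a node that already has the cluster conf and admin keyring, would be:

    # register the host with the orchestrator inventory (hostname and addr taken from the log above)
    ceph orch host add vm00 192.168.123.100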
2026-03-09T17:20:35.664 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:35 vm00 bash[20770]: cephadm 2026-03-09T17:20:33.992891+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T17:20:35.665 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:35 vm00 bash[20770]: cephadm 2026-03-09T17:20:33.992891+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T17:20:35.665 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:35 vm00 bash[20770]: audit 2026-03-09T17:20:35.288954+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:35.665 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:35 vm00 bash[20770]: audit 2026-03-09T17:20:35.288954+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:35.665 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:35 vm00 bash[20770]: audit 2026-03-09T17:20:35.292658+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:35.665 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:35 vm00 bash[20770]: audit 2026-03-09T17:20:35.292658+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:35.693 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 Processor -- start 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- start start 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f0074a50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f14f0075020 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14eeffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f0074a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14eeffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f0074a50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36338/0 (socket says 192.168.123.100:36338) 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14eeffd640 1 -- 192.168.123.100:0/1000121293 learned_addr learned my addr 192.168.123.100:0/1000121293 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:35.694 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14eeffd640 1 -- 192.168.123.100:0/1000121293 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f14f00751a0 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14eeffd640 1 --2- 192.168.123.100:0/1000121293 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f0074a50 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f14e001adc0 tx=0x7f14e00402a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=33643857a3cdc7ce server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14ee7fc640 1 -- 192.168.123.100:0/1000121293 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f14e0040aa0 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14ee7fc640 1 -- 192.168.123.100:0/1000121293 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f14e0040c40 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14ee7fc640 1 -- 192.168.123.100:0/1000121293 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f14e0047530 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- 192.168.123.100:0/1000121293 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 msgr2=0x7f14f0074a50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 --2- 192.168.123.100:0/1000121293 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f0074a50 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f14e001adc0 tx=0x7f14e00402a0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- 192.168.123.100:0/1000121293 shutdown_connections 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 --2- 192.168.123.100:0/1000121293 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f0074a50 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- 192.168.123.100:0/1000121293 >> 192.168.123.100:0/1000121293 conn(0x7f14f006fe80 msgr2=0x7f14f00722e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- 192.168.123.100:0/1000121293 shutdown_connections 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 -- 192.168.123.100:0/1000121293 wait complete. 
2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.535+0000 7f14f52fa640 1 Processor -- start 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14f52fa640 1 -- start start 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14f52fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f01a2bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14f52fa640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f14f010ee00 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14eeffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f01a2bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14eeffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f01a2bf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36342/0 (socket says 192.168.123.100:36342) 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14eeffd640 1 -- 192.168.123.100:0/162721084 learned_addr learned my addr 192.168.123.100:0/162721084 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14eeffd640 1 -- 192.168.123.100:0/162721084 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f14f01a3130 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14eeffd640 1 --2- 192.168.123.100:0/162721084 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f01a2bf0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f14e0047dd0 tx=0x7f14e0047e00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f14e0018660 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14f52fa640 1 -- 192.168.123.100:0/162721084 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f14f01a33c0 con 0x7f14f0074650 2026-03-09T17:20:35.694 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14f52fa640 1 -- 192.168.123.100:0/162721084 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f14f01a60b0 con 0x7f14f0074650 2026-03-09T17:20:35.694 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f14c4005180 con 0x7f14f0074650 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f14e0057070 con 0x7f14f0074650 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f14e0052cf0 con 0x7f14f0074650 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f14e0051070 con 0x7f14f0074650 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.539+0000 7f14ecff9640 1 --2- 192.168.123.100:0/162721084 >> v2:192.168.123.100:6800/1655006894 conn(0x7f14cc03dc00 0x7f14cc0400c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.543+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f14e005cc60 con 0x7f14f0074650 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.543+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f14e005c460 con 0x7f14f0074650 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.547+0000 7f14e7fff640 1 --2- 192.168.123.100:0/162721084 >> v2:192.168.123.100:6800/1655006894 conn(0x7f14cc03dc00 0x7f14cc0400c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.551+0000 7f14e7fff640 1 --2- 192.168.123.100:0/162721084 >> v2:192.168.123.100:6800/1655006894 conn(0x7f14cc03dc00 0x7f14cc0400c0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f14d80099c0 tx=0x7f14d8006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.647+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 --> v2:192.168.123.100:6800/1655006894 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f14c4002bf0 con 0x7f14cc03dc00 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.651+0000 7f14ecff9640 1 -- 192.168.123.100:0/162721084 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f14c4002bf0 con 0x7f14cc03dc00 
2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 >> v2:192.168.123.100:6800/1655006894 conn(0x7f14cc03dc00 msgr2=0x7f14cc0400c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 --2- 192.168.123.100:0/162721084 >> v2:192.168.123.100:6800/1655006894 conn(0x7f14cc03dc00 0x7f14cc0400c0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f14d80099c0 tx=0x7f14d8006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 msgr2=0x7f14f01a2bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 --2- 192.168.123.100:0/162721084 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f01a2bf0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f14e0047dd0 tx=0x7f14e0047e00 comp rx=0 tx=0).stop 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 shutdown_connections 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 --2- 192.168.123.100:0/162721084 >> v2:192.168.123.100:6800/1655006894 conn(0x7f14cc03dc00 0x7f14cc0400c0 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 --2- 192.168.123.100:0/162721084 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f14f0074650 0x7f14f01a2bf0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 >> 192.168.123.100:0/162721084 conn(0x7f14f006fe80 msgr2=0x7f14f0070aa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 shutdown_connections 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.655+0000 7f14e5ffb640 1 -- 192.168.123.100:0/162721084 wait complete. 2026-03-09T17:20:35.695 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mgr service... 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
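The "Scheduled mon update..." stdout earlier in this block is the reply to an orch apply request carrying service_type mon with unmanaged set to true: the task pins the mon service so cephadm will not place mon daemons on its own while the test deploys them explicitly. A minimal CLI sketch of the same request:

    # mark the mon service unmanaged so the orchestrator leaves daemon placement to the test
    ceph orch apply mon --unmanaged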
2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 Processor -- start 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- start start 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f7828108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7828109540 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7826ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f7828108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7826ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f7828108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36356/0 (socket says 192.168.123.100:36356) 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7826ffd640 1 -- 192.168.123.100:0/2124464437 learned_addr learned my addr 192.168.123.100:0/2124464437 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7826ffd640 1 -- 192.168.123.100:0/2124464437 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7828109d70 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7826ffd640 1 --2- 192.168.123.100:0/2124464437 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f7828108f70 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f781c009920 tx=0x7f781c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a0aa1ba57d541eaf server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7825ffb640 1 -- 192.168.123.100:0/2124464437 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f781c03c070 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7825ffb640 1 -- 192.168.123.100:0/2124464437 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f781c037440 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f7825ffb640 1 -- 192.168.123.100:0/2124464437 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f781c0354d0 con 0x7f7828108b70 2026-03-09T17:20:35.978 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- 192.168.123.100:0/2124464437 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 msgr2=0x7f7828108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 --2- 192.168.123.100:0/2124464437 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f7828108f70 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f781c009920 tx=0x7f781c02ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- 192.168.123.100:0/2124464437 shutdown_connections 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 --2- 192.168.123.100:0/2124464437 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f7828108f70 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- 192.168.123.100:0/2124464437 >> 192.168.123.100:0/2124464437 conn(0x7f782807c040 msgr2=0x7f782807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- 192.168.123.100:0/2124464437 shutdown_connections 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 -- 192.168.123.100:0/2124464437 wait complete. 
2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.815+0000 7f782da41640 1 Processor -- start 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782da41640 1 -- start start 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782da41640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f782819e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782da41640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f782810cfa0 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f7826ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f782819e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f7826ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f782819e9a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36372/0 (socket says 192.168.123.100:36372) 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f7826ffd640 1 -- 192.168.123.100:0/213924442 learned_addr learned my addr 192.168.123.100:0/213924442 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f7826ffd640 1 -- 192.168.123.100:0/213924442 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f782819eee0 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f7826ffd640 1 --2- 192.168.123.100:0/213924442 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f782819e9a0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f781c009a50 tx=0x7f781c035d70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f781c03c030 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f782819f170 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f78281a1e60 con 0x7f7828108b70 2026-03-09T17:20:35.978 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f781c03e070 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f781c0429f0 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f781c042c60 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782ca3f640 1 --2- 192.168.123.100:0/213924442 >> v2:192.168.123.100:6800/1655006894 conn(0x7f77f803dc00 0x7f77f80400c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f78267fc640 1 --2- 192.168.123.100:0/213924442 >> v2:192.168.123.100:6800/1655006894 conn(0x7f77f803dc00 0x7f77f80400c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f78267fc640 1 --2- 192.168.123.100:0/213924442 >> v2:192.168.123.100:6800/1655006894 conn(0x7f77f803dc00 0x7f77f80400c0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f7810009a10 tx=0x7f7810006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f781c03d070 con 0x7f7828108b70 2026-03-09T17:20:35.978 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.819+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7828108ff0 con 0x7f7828108b70 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.823+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f781c03c1d0 con 0x7f7828108b70 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.927+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 --> v2:192.168.123.100:6800/1655006894 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f7828106660 con 0x7f77f803dc00 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782ca3f640 1 -- 192.168.123.100:0/213924442 <== mgr.14118 v2:192.168.123.100:6800/1655006894 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f7828106660 con 0x7f77f803dc00 
2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 >> v2:192.168.123.100:6800/1655006894 conn(0x7f77f803dc00 msgr2=0x7f77f80400c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 --2- 192.168.123.100:0/213924442 >> v2:192.168.123.100:6800/1655006894 conn(0x7f77f803dc00 0x7f77f80400c0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f7810009a10 tx=0x7f7810006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 msgr2=0x7f782819e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 --2- 192.168.123.100:0/213924442 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f782819e9a0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f781c009a50 tx=0x7f781c035d70 comp rx=0 tx=0).stop 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 shutdown_connections 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 --2- 192.168.123.100:0/213924442 >> v2:192.168.123.100:6800/1655006894 conn(0x7f77f803dc00 0x7f77f80400c0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 --2- 192.168.123.100:0/213924442 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7828108b70 0x7f782819e9a0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 >> 192.168.123.100:0/213924442 conn(0x7f782807c040 msgr2=0x7f7828105f50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 shutdown_connections 2026-03-09T17:20:35.979 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:35.935+0000 7f782da41640 1 -- 192.168.123.100:0/213924442 wait complete. 
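The same is then done for the mgr service; the mgr_command above carries service_type mgr with unmanaged true. The equivalent sketch:

    # likewise pin the mgr service
    ceph orch apply mgr --unmanaged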
2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b0ca32640 1 Processor -- start 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b0ca32640 1 -- start start 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b0ca32640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b08107530 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b0ca32640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4b0807a760 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b08107530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b08107530 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36376/0 (socket says 192.168.123.100:36376) 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b06575640 1 -- 192.168.123.100:0/1416270005 learned_addr learned my addr 192.168.123.100:0/1416270005 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.091+0000 7f4b06575640 1 -- 192.168.123.100:0/1416270005 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4b08107a70 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b06575640 1 --2- 192.168.123.100:0/1416270005 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b08107530 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f4afc009920 tx=0x7f4afc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9fe80491736a3620 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b05573640 1 -- 192.168.123.100:0/1416270005 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4afc03c070 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b05573640 1 -- 192.168.123.100:0/1416270005 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f4afc037440 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/1416270005 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 msgr2=0x7f4b08107530 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 --2- 192.168.123.100:0/1416270005 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b08107530 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f4afc009920 tx=0x7f4afc02ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/1416270005 shutdown_connections 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 --2- 192.168.123.100:0/1416270005 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b08107530 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/1416270005 >> 192.168.123.100:0/1416270005 conn(0x7f4b08100f90 msgr2=0x7f4b081033b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/1416270005 shutdown_connections 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/1416270005 wait complete. 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 Processor -- start 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- start start 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b081a2f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4b081089b0 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b081a2f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b081a2f20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36378/0 (socket says 192.168.123.100:36378) 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b06575640 1 -- 192.168.123.100:0/2526236106 learned_addr learned my addr 192.168.123.100:0/2526236106 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-09T17:20:36.095+0000 7f4b06575640 1 -- 192.168.123.100:0/2526236106 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4b081a3460 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b06575640 1 --2- 192.168.123.100:0/2526236106 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b081a2f20 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f4afc035ed0 tx=0x7f4afc035f00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4afc045070 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f4afc037ca0 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4afc03c070 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4b081a36f0 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f4b081a63e0 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f4afc037850 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.095+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4b08105520 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.099+0000 7f4af77fe640 1 --2- 192.168.123.100:0/2526236106 >> v2:192.168.123.100:6800/1655006894 conn(0x7f4ae003d7f0 0x7f4ae003fcb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.099+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f4afc076670 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.099+0000 7f4b05d74640 1 --2- 192.168.123.100:0/2526236106 >> v2:192.168.123.100:6800/1655006894 conn(0x7f4ae003d7f0 0x7f4ae003fcb0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.099+0000 7f4b05d74640 1 --2- 192.168.123.100:0/2526236106 >> v2:192.168.123.100:6800/1655006894 conn(0x7f4ae003d7f0 0x7f4ae003fcb0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f4af0009a10 tx=0x7f4af0006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.099+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4afc048360 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.191+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) -- 0x7f4b0819c2d0 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.199+0000 7f4af77fe640 1 -- 192.168.123.100:0/2526236106 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v6)=0 v6) ==== 142+0+0 (secure 0 0 0) 0x7f4afc043120 con 0x7f4b08105120 2026-03-09T17:20:36.239 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 >> v2:192.168.123.100:6800/1655006894 conn(0x7f4ae003d7f0 msgr2=0x7f4ae003fcb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 --2- 192.168.123.100:0/2526236106 >> v2:192.168.123.100:6800/1655006894 conn(0x7f4ae003d7f0 0x7f4ae003fcb0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f4af0009a10 tx=0x7f4af0006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 msgr2=0x7f4b081a2f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 --2- 192.168.123.100:0/2526236106 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b081a2f20 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f4afc035ed0 tx=0x7f4afc035f00 comp rx=0 tx=0).stop 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 shutdown_connections 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 --2- 192.168.123.100:0/2526236106 >> v2:192.168.123.100:6800/1655006894 conn(0x7f4ae003d7f0 0x7f4ae003fcb0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 --2- 
192.168.123.100:0/2526236106 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4b08105120 0x7f4b081a2f20 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 >> 192.168.123.100:0/2526236106 conn(0x7f4b08100f90 msgr2=0x7f4b0810b080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 shutdown_connections 2026-03-09T17:20:36.240 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.203+0000 7f4b0ca32640 1 -- 192.168.123.100:0/2526236106 wait complete. 2026-03-09T17:20:36.509 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.351+0000 7ff90d262640 1 Processor -- start 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.351+0000 7ff90d262640 1 -- start start 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.351+0000 7ff90d262640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908106ce0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff9081072b0 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908106ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908106ce0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36390/0 (socket says 192.168.123.100:36390) 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 -- 192.168.123.100:0/3835659803 learned_addr learned my addr 192.168.123.100:0/3835659803 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 -- 192.168.123.100:0/3835659803 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff908107ae0 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 --2- 192.168.123.100:0/3835659803 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908106ce0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7ff8fc009920 tx=0x7ff8fc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c6a1f8004cbd0c3c server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:36.510 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff905d74640 1 -- 192.168.123.100:0/3835659803 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff8fc03c070 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff905d74640 1 -- 192.168.123.100:0/3835659803 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7ff8fc037440 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- 192.168.123.100:0/3835659803 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 msgr2=0x7ff908106ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 --2- 192.168.123.100:0/3835659803 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908106ce0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7ff8fc009920 tx=0x7ff8fc02ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- 192.168.123.100:0/3835659803 shutdown_connections 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 --2- 192.168.123.100:0/3835659803 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908106ce0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- 192.168.123.100:0/3835659803 >> 192.168.123.100:0/3835659803 conn(0x7ff908102090 msgr2=0x7ff9081044b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- 192.168.123.100:0/3835659803 shutdown_connections 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- 192.168.123.100:0/3835659803 wait complete. 
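The mon_command earlier in this block sets the mgr/cephadm/container_init option. The truncated payload only prints the option name, so the config target and value below are assumptions for illustration (container_init is a boolean option of the cephadm mgr module):

    # target section and value assumed; neither is visible in the log payload
    ceph config set mgr mgr/cephadm/container_init true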
2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 Processor -- start 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- start start 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908195740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff90d262640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff90810ad10 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908195740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908195740 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36400/0 (socket says 192.168.123.100:36400) 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 -- 192.168.123.100:0/360443459 learned_addr learned my addr 192.168.123.100:0/360443459 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.355+0000 7ff906d76640 1 -- 192.168.123.100:0/360443459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff908195c80 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff906d76640 1 --2- 192.168.123.100:0/360443459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908195740 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7ff8fc035f40 tx=0x7ff8fc035f70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff8fc045070 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7ff8fc037c50 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff8fc03c070 con 0x7ff9081068e0 2026-03-09T17:20:36.510 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff908195f10 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff908192280 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7ff8fc04a430 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff8ebfff640 1 --2- 192.168.123.100:0/360443459 >> v2:192.168.123.100:6800/1655006894 conn(0x7ff8e003dc50 0x7ff8e0040110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7ff8fc0767e0 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff906575640 1 --2- 192.168.123.100:0/360443459 >> v2:192.168.123.100:6800/1655006894 conn(0x7ff8e003dc50 0x7ff8e0040110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff906575640 1 --2- 192.168.123.100:0/360443459 >> v2:192.168.123.100:6800/1655006894 conn(0x7ff8e003dc50 0x7ff8e0040110 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7ff8f4009a10 tx=0x7ff8f4006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.359+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff908106ce0 con 0x7ff9081068e0 2026-03-09T17:20:36.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.363+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff8fc067020 con 0x7ff9081068e0 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.455+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) -- 0x7ff908192810 con 0x7ff9081068e0 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.459+0000 7ff8ebfff640 1 -- 192.168.123.100:0/360443459 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v7) ==== 130+0+0 (secure 0 0 0)
0x7ff8fc033200 con 0x7ff9081068e0 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 >> v2:192.168.123.100:6800/1655006894 conn(0x7ff8e003dc50 msgr2=0x7ff8e0040110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 --2- 192.168.123.100:0/360443459 >> v2:192.168.123.100:6800/1655006894 conn(0x7ff8e003dc50 0x7ff8e0040110 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7ff8f4009a10 tx=0x7ff8f4006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 msgr2=0x7ff908195740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 --2- 192.168.123.100:0/360443459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908195740 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7ff8fc035f40 tx=0x7ff8fc035f70 comp rx=0 tx=0).stop 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 shutdown_connections 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 --2- 192.168.123.100:0/360443459 >> v2:192.168.123.100:6800/1655006894 conn(0x7ff8e003dc50 0x7ff8e0040110 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 --2- 192.168.123.100:0/360443459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff9081068e0 0x7ff908195740 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 >> 192.168.123.100:0/360443459 conn(0x7ff908102090 msgr2=0x7ff908102d00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 shutdown_connections 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.467+0000 7ff90d262640 1 -- 192.168.123.100:0/360443459 wait complete. 2026-03-09T17:20:36.511 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module... 
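The stderr above is the bootstrap step that configures and enables the dashboard mgr module: the CLI pushes mgr/dashboard/ssl_server_port via a mon_command and then (in the next invocation) turns the module on. A minimal sketch of the same sequence driven from Python, assuming a host with the ceph CLI and an admin keyring; the helper function is illustrative, and the port value is not shown in the captured mon_command, so 8443 below is only a placeholder:

    import json
    import subprocess

    def ceph(*args):
        # Run the ceph CLI and return its stdout; raises if the command fails.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    # Placeholder port; the real value is elided in the log above.
    ceph("config", "set", "mgr", "mgr/dashboard/ssl_server_port", "8443")
    ceph("mgr", "module", "enable", "dashboard")

    # Confirm the module now shows up as enabled.
    modules = json.loads(ceph("mgr", "module", "ls", "--format", "json"))
    print("dashboard enabled:", "dashboard" in modules.get("enabled_modules", []))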
2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: cephadm 2026-03-09T17:20:35.289904+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: cephadm 2026-03-09T17:20:35.289904+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:35.650496+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:35.650496+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: cephadm 2026-03-09T17:20:35.651340+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: cephadm 2026-03-09T17:20:35.651340+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:35.653942+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:35.653942+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:35.935663+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:35.935663+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:36.197702+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/2526236106' entity='client.admin' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:36.197702+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/2526236106' entity='client.admin' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:36.461848+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/360443459' entity='client.admin' 2026-03-09T17:20:36.757 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:36 vm00 bash[20770]: audit 2026-03-09T17:20:36.461848+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 
192.168.123.100:0/360443459' entity='client.admin' 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6229166640 1 Processor -- start 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6229166640 1 -- start start 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6229166640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224074910 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6229166640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6224074ee0 con 0x7f6224074510 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6223fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224074910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6223fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224074910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36408/0 (socket says 192.168.123.100:36408) 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6223fff640 1 -- 192.168.123.100:0/2430124740 learned_addr learned my addr 192.168.123.100:0/2430124740 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.667+0000 7f6223fff640 1 -- 192.168.123.100:0/2430124740 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6224075060 con 0x7f6224074510 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.671+0000 7f6223fff640 1 --2- 192.168.123.100:0/2430124740 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224074910 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f6214009920 tx=0x7f621402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7d405acea313f58f server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.671+0000 7f6222ffd640 1 -- 192.168.123.100:0/2430124740 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f621403c070 con 0x7f6224074510 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.671+0000 7f6222ffd640 1 -- 192.168.123.100:0/2430124740 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6214037890 con 0x7f6224074510 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.671+0000 7f6229166640 1 -- 192.168.123.100:0/2430124740 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 msgr2=0x7f6224074910 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.671+0000 7f6229166640 1 --2- 192.168.123.100:0/2430124740 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224074910 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f6214009920 tx=0x7f621402ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- 192.168.123.100:0/2430124740 shutdown_connections 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 --2- 192.168.123.100:0/2430124740 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224074910 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- 192.168.123.100:0/2430124740 >> 192.168.123.100:0/2430124740 conn(0x7f622406fe30 msgr2=0x7f62240722b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- 192.168.123.100:0/2430124740 shutdown_connections 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- 192.168.123.100:0/2430124740 wait complete. 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 Processor -- start 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- start start 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224086d40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6224079cd0 con 0x7f6224074510 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6223fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224086d40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6223fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224086d40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36410/0 (socket says 192.168.123.100:36410) 2026-03-09T17:20:37.899 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6223fff640 1 -- 192.168.123.100:0/1830910416 learned_addr learned my addr 192.168.123.100:0/1830910416 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:37.899 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6223fff640 1 -- 192.168.123.100:0/1830910416 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f622408a200 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6223fff640 1 --2- 192.168.123.100:0/1830910416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224086d40 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f6214036620 tx=0x7f6214036650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f621403c030 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- 192.168.123.100:0/1830910416 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6224087280 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f6229166640 1 -- 192.168.123.100:0/1830910416 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6224087700 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f621403e070 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.675+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6214042410 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.679+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f6214042670 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.679+0000 7f62217fa640 1 --2- 192.168.123.100:0/1830910416 >> v2:192.168.123.100:6800/1655006894 conn(0x7f620803dd30 0x7f62080401f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.679+0000 7f62237fe640 1 --2- 192.168.123.100:0/1830910416 >> v2:192.168.123.100:6800/1655006894 conn(0x7f620803dd30 0x7f62080401f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.679+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f621403d070 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.679+0000 7f62237fe640 1 --2- 
192.168.123.100:0/1830910416 >> v2:192.168.123.100:6800/1655006894 conn(0x7f620803dd30 0x7f62080401f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f621c00ad30 tx=0x7f621c0093f0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.679+0000 7f6229166640 1 -- 192.168.123.100:0/1830910416 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f61f0005180 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.683+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f622408a200 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.807+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 7 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f6214047210 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:36.823+0000 7f6229166640 1 -- 192.168.123.100:0/1830910416 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) -- 0x7f61f0005470 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.807+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) ==== 88+0+0 (secure 0 0 0) 0x7f6214030e20 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.807+0000 7f62217fa640 1 -- 192.168.123.100:0/1830910416 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mgrmap(e 9) ==== 50225+0+0 (secure 0 0 0) 0x7f6214042990 con 0x7f6224074510 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 -- 192.168.123.100:0/1830910416 >> v2:192.168.123.100:6800/1655006894 conn(0x7f620803dd30 msgr2=0x7f62080401f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 --2- 192.168.123.100:0/1830910416 >> v2:192.168.123.100:6800/1655006894 conn(0x7f620803dd30 0x7f62080401f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f621c00ad30 tx=0x7f621c0093f0 comp rx=0 tx=0).stop 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 -- 192.168.123.100:0/1830910416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 msgr2=0x7f6224086d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 --2- 192.168.123.100:0/1830910416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224086d40 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f6214036620 tx=0x7f6214036650 comp rx=0 tx=0).stop 2026-03-09T17:20:37.900 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 -- 192.168.123.100:0/1830910416 shutdown_connections 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 --2- 192.168.123.100:0/1830910416 >> v2:192.168.123.100:6800/1655006894 conn(0x7f620803dd30 0x7f62080401f0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 --2- 192.168.123.100:0/1830910416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6224074510 0x7f6224086d40 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 -- 192.168.123.100:0/1830910416 >> 192.168.123.100:0/1830910416 conn(0x7f622406fe30 msgr2=0x7f6224070930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 -- 192.168.123.100:0/1830910416 shutdown_connections 2026-03-09T17:20:37.900 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:37.811+0000 7f6202ffd640 1 -- 192.168.123.100:0/1830910416 wait complete. 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:35.932140+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:35.932140+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: cephadm 2026-03-09T17:20:35.932888+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: cephadm 2026-03-09T17:20:35.932888+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:36.806668+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:36.806668+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:36.832645+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/1830910416' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:36.832645+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 
192.168.123.100:0/1830910416' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:37.114337+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:37 vm00 bash[20770]: audit 2026-03-09T17:20:37.114337+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:37 vm00 bash[21037]: ignoring --setuser ceph since I am not root 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:37 vm00 bash[21037]: ignoring --setgroup ceph since I am not root 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:38 vm00 bash[21037]: debug 2026-03-09T17:20:38.003+0000 7f58bf24c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:38 vm00 bash[21037]: debug 2026-03-09T17:20:38.039+0000 7f58bf24c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:20:38.175 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:38 vm00 bash[21037]: debug 2026-03-09T17:20:38.171+0000 7f58bf24c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a40319640 1 Processor -- start 2026-03-09T17:20:38.235 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a40319640 1 -- start start 2026-03-09T17:20:38.236 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a40319640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a38108de0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:38.236 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a40319640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0a381093b0 con 0x7f0a381089e0 2026-03-09T17:20:38.236 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3e08e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a38108de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:38.236 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3e08e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a38108de0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36432/0 (socket says 192.168.123.100:36432) 2026-03-09T17:20:38.236 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3e08e640 1 -- 192.168.123.100:0/2818116322 learned_addr learned my addr 192.168.123.100:0/2818116322 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3e08e640 1 -- 192.168.123.100:0/2818116322 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0a38109be0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3e08e640 1 --2- 192.168.123.100:0/2818116322 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a38108de0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f0a28009b80 tx=0x7f0a2802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6545603d464eb8c1 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3d08c640 1 -- 192.168.123.100:0/2818116322 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0a2803c070 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3d08c640 1 -- 192.168.123.100:0/2818116322 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0a28037440 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a3d08c640 1 -- 192.168.123.100:0/2818116322 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0a280354d0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a40319640 1 -- 192.168.123.100:0/2818116322 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 msgr2=0x7f0a38108de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.051+0000 7f0a40319640 1 --2- 192.168.123.100:0/2818116322 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a38108de0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f0a28009b80 tx=0x7f0a2802f190 comp rx=0 tx=0).stop 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/2818116322 shutdown_connections 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 --2- 192.168.123.100:0/2818116322 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a38108de0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/2818116322 >> 192.168.123.100:0/2818116322 conn(0x7f0a3807bf00 msgr2=0x7f0a38106690 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/2818116322 shutdown_connections 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/2818116322 wait complete. 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 Processor -- start 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- start start 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a3819e800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a3e08e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a3819e800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a3e08e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a3819e800 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36446/0 (socket says 192.168.123.100:36446) 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a3e08e640 1 -- 192.168.123.100:0/4251548214 learned_addr learned my addr 192.168.123.100:0/4251548214 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0a3810ce10 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a3e08e640 1 -- 192.168.123.100:0/4251548214 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0a3819ed40 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a3e08e640 1 --2- 192.168.123.100:0/4251548214 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a3819e800 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f0a28009cb0 tx=0x7f0a28037f80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0a28047070 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
-- mon_subscribe({mgrmap=0+}) -- 0x7f0a3819efd0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0a381a1cc0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.055+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0a28035d70 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0a2803c070 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 9) ==== 50225+0+0 (secure 0 0 0) 0x7f0a280413e0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a1f7fe640 1 --2- 192.168.123.100:0/4251548214 >> v2:192.168.123.100:6800/1655006894 conn(0x7f0a1003dcf0 0x7f0a100401b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a3d88d640 1 -- 192.168.123.100:0/4251548214 >> v2:192.168.123.100:6800/1655006894 conn(0x7f0a1003dcf0 msgr2=0x7f0a100401b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1655006894 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a3d88d640 1 --2- 192.168.123.100:0/4251548214 >> v2:192.168.123.100:6800/1655006894 conn(0x7f0a1003dcf0 0x7f0a100401b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f0a28079da0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.059+0000 7f0a1d7fa640 1 -- 192.168.123.100:0/4251548214 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0a00005180 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.063+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0a28040e80 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.183+0000 7f0a1d7fa640 1 -- 192.168.123.100:0/4251548214 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f0a00005740 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-09T17:20:38.183+0000 7f0a1f7fe640 1 -- 192.168.123.100:0/4251548214 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) ==== 56+0+88 (secure 0 0 0) 0x7f0a280418e0 con 0x7f0a381089e0 2026-03-09T17:20:38.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 >> v2:192.168.123.100:6800/1655006894 conn(0x7f0a1003dcf0 msgr2=0x7f0a100401b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 --2- 192.168.123.100:0/4251548214 >> v2:192.168.123.100:6800/1655006894 conn(0x7f0a1003dcf0 0x7f0a100401b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 msgr2=0x7f0a3819e800 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 --2- 192.168.123.100:0/4251548214 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a3819e800 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f0a28009cb0 tx=0x7f0a28037f80 comp rx=0 tx=0).stop 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 shutdown_connections 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 --2- 192.168.123.100:0/4251548214 >> v2:192.168.123.100:6800/1655006894 conn(0x7f0a1003dcf0 0x7f0a100401b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 --2- 192.168.123.100:0/4251548214 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0a381089e0 0x7f0a3819e800 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 >> 192.168.123.100:0/4251548214 conn(0x7f0a3807bf00 msgr2=0x7f0a38105ec0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 shutdown_connections 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.187+0000 7f0a40319640 1 -- 192.168.123.100:0/4251548214 wait complete. 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-09T17:20:38.238 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 9... 
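The "Waiting for the mgr to restart... / Waiting for mgr epoch 9..." lines correspond to a poll loop of this general shape: enabling the dashboard forces the active mgr to respawn, so the bootstrap keeps asking the monitors for the mgr map until its epoch advances past the value it recorded (epoch 9 here). A rough equivalent, assuming the ceph CLI is reachable; the JSON field names match the "mgr stat" output shown above:

    import json
    import subprocess
    import time

    def mgr_stat():
        # "ceph mgr stat" reports the current mgrmap epoch and availability.
        out = subprocess.run(["ceph", "mgr", "stat", "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    def wait_for_mgr_epoch(min_epoch, timeout=120, interval=2):
        # Poll until the mgr map epoch passes min_epoch and a mgr is available.
        deadline = time.time() + timeout
        while time.time() < deadline:
            stat = mgr_stat()
            if stat["epoch"] > min_epoch and stat["available"]:
                return stat
            time.sleep(interval)
        raise TimeoutError(f"mgr did not pass epoch {min_epoch} within {timeout}s")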
2026-03-09T17:20:38.813 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:38 vm00 bash[21037]: debug 2026-03-09T17:20:38.555+0000 7f58bf24c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:20:39.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:38 vm00 bash[20770]: audit 2026-03-09T17:20:37.807224+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.100:0/1830910416' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T17:20:39.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:38 vm00 bash[20770]: audit 2026-03-09T17:20:37.807224+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.100:0/1830910416' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T17:20:39.120 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:38 vm00 bash[20770]: cluster 2026-03-09T17:20:37.810143+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-09T17:20:39.120 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:38 vm00 bash[20770]: cluster 2026-03-09T17:20:37.810143+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-09T17:20:39.120 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:38 vm00 bash[20770]: audit 2026-03-09T17:20:38.186451+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/4251548214' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:20:39.120 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:38 vm00 bash[20770]: audit 2026-03-09T17:20:38.186451+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/4251548214' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:20:39.120 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.027+0000 7f58bf24c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:20:39.387 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.115+0000 7f58bf24c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:20:39.388 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T17:20:39.388 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
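The repeated "Module <name> has missing NOTIFY_TYPES member" messages from mgr.y are emitted while the restarted mgr loads its Python modules: newer mgr code expects each module to declare which cluster notifications it consumes, and warns when the attribute is absent. As a rough illustration only (the attribute and enum names follow mgr_module.py as I recall them, not necessarily the exact squid API), a module avoids the warning by declaring something like:

    from mgr_module import MgrModule, NotifyType  # names assumed from mgr_module.py

    class Module(MgrModule):
        # Declaring NOTIFY_TYPES tells the mgr which notifications to deliver;
        # modules that omit it trigger the warning seen in the journal above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Only the notification types listed above are routed here.
            pass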
2026-03-09T17:20:39.388 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: from numpy import show_config as show_numpy_config 2026-03-09T17:20:39.388 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.243+0000 7f58bf24c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:20:39.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.383+0000 7f58bf24c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:20:39.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.423+0000 7f58bf24c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:20:39.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.463+0000 7f58bf24c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:20:39.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.511+0000 7f58bf24c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:20:39.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:39 vm00 bash[21037]: debug 2026-03-09T17:20:39.563+0000 7f58bf24c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:20:40.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.027+0000 7f58bf24c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:20:40.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.067+0000 7f58bf24c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:20:40.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.107+0000 7f58bf24c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T17:20:40.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.267+0000 7f58bf24c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:20:40.653 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.319+0000 7f58bf24c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:20:40.653 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.371+0000 7f58bf24c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:20:40.653 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.487+0000 7f58bf24c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:20:40.904 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.651+0000 7f58bf24c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:20:40.904 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.823+0000 7f58bf24c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:20:40.904 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 2026-03-09T17:20:40.859+0000 7f58bf24c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:20:41.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:40 vm00 bash[21037]: debug 
2026-03-09T17:20:40.903+0000 7f58bf24c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:20:41.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:41 vm00 bash[21037]: debug 2026-03-09T17:20:41.067+0000 7f58bf24c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:20:41.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:20:41 vm00 bash[21037]: debug 2026-03-09T17:20:41.299+0000 7f58bf24c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:20:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.305050+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:20:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.305050+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:20:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.305469+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon y 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.305469+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon y 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.310766+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.310766+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.310939+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00557159s) 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.310939+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00557159s) 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.315943+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.315943+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.316779+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.316779+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.317616+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": 
"mds metadata"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.317616+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.317973+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.317973+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.318294+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.318294+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.324863+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon y is now available 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: cluster 2026-03-09T17:20:41.324863+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon y is now available 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.342022+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.342022+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.342995+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.342995+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.344551+0000 mon.a (mon.0) 88 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:41 vm00 bash[20770]: audit 2026-03-09T17:20:41.344551+0000 mon.a (mon.0) 88 : audit [DBG] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 Processor -- start 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- start start 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f19380a4d10 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f19380a52e0 con 0x7f19380a4910 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f19380a4d10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f19380a4d10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36462/0 (socket says 192.168.123.100:36462) 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 -- 192.168.123.100:0/1596833433 learned_addr learned my addr 192.168.123.100:0/1596833433 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 -- 192.168.123.100:0/1596833433 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f19380a5b10 con 0x7f19380a4910 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 --2- 192.168.123.100:0/1596833433 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f19380a4d10 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f19340089a0 tx=0x7f1934031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=80ba440806aa11e6 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:42.364 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f193f7fe640 1 -- 192.168.123.100:0/1596833433 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f193403c480 con 0x7f19380a4910 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:38.407+0000 7f193f7fe640 1 -- 192.168.123.100:0/1596833433 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f193403ca90 con 0x7f19380a4910 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- 192.168.123.100:0/1596833433 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 msgr2=0x7f19380a4d10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 --2- 192.168.123.100:0/1596833433 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f19380a4d10 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f19340089a0 tx=0x7f1934031440 comp rx=0 tx=0).stop 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- 192.168.123.100:0/1596833433 shutdown_connections 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 --2- 192.168.123.100:0/1596833433 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f19380a4d10 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- 192.168.123.100:0/1596833433 >> 192.168.123.100:0/1596833433 conn(0x7f193809fc20 msgr2=0x7f19380a2080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- 192.168.123.100:0/1596833433 shutdown_connections 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- 192.168.123.100:0/1596833433 wait complete. 
2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 Processor -- start 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- start start 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f193813d7c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19458cc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f19380a6650 con 0x7f19380a4910 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f193813d7c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:42.365 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f193813d7c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36478/0 (socket says 192.168.123.100:36478) 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.407+0000 7f19448ca640 1 -- 192.168.123.100:0/2839204157 learned_addr learned my addr 192.168.123.100:0/2839204157 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f19448ca640 1 -- 192.168.123.100:0/2839204157 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f193813dd00 con 0x7f19380a4910 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f19448ca640 1 --2- 192.168.123.100:0/2839204157 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f193813d7c0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1934002410 tx=0x7f1934009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f193403c750 con 0x7f19380a4910 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f193400b520 con 0x7f19380a4910 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f193813df90 con 0x7f19380a4910 2026-03-09T17:20:42.366 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f19340455c0 con 0x7f19380a4910 2026-03-09T17:20:42.366 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.411+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f193813a300 con 0x7f19380a4910 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.415+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 9) ==== 50225+0+0 (secure 0 0 0) 0x7f193400bce0 con 0x7f19380a4910 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.415+0000 7f193dffb640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.415+0000 7f193ffff640 1 -- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 msgr2=0x7f1924040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1655006894 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.415+0000 7f193ffff640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.415+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 --> v2:192.168.123.100:6800/1655006894 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f1924040830 con 0x7f192403dca0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.415+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f1934077a20 con 0x7f19380a4910 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.615+0000 7f193ffff640 1 -- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 msgr2=0x7f1924040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1655006894 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:38.615+0000 7f193ffff640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:39.015+0000 7f193ffff640 1 -- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 msgr2=0x7f1924040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1655006894 2026-03-09T17:20:42.367 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:39.015+0000 7f193ffff640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:39.815+0000 7f193ffff640 1 -- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 msgr2=0x7f1924040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1655006894 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:39.815+0000 7f193ffff640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:41.307+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mgrmap(e 10) ==== 50027+0+0 (secure 0 0 0) 0x7f1934076430 con 0x7f19380a4910 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:41.307+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 msgr2=0x7f1924040160 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:41.307+0000 7f193dffb640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.311+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7f1934076eb0 con 0x7f19380a4910 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.311+0000 7f193dffb640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/3114914985 conn(0x7f19240416f0 0x7f1924043ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.311+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 --> v2:192.168.123.100:6800/3114914985 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f193404adf0 con 0x7f19240416f0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.311+0000 7f193ffff640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/3114914985 conn(0x7f19240416f0 0x7f1924043ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.311+0000 7f193ffff640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/3114914985 conn(0x7f19240416f0 0x7f1924043ae0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f19400664d0 tx=0x7f194006b6e0 comp rx=0 tx=0).ready entity=mgr.14150 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.315+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7f193404adf0 con 0x7f19240416f0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 --> v2:192.168.123.100:6800/3114914985 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f19380a4d10 con 0x7f19240416f0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f193dffb640 1 -- 192.168.123.100:0/2839204157 <== mgr.14150 v2:192.168.123.100:6800/3114914985 2 ==== command_reply(tid 1: 0 ) ==== 8+0+52 (secure 0 0 0) 0x7f19380a4d10 con 0x7f19240416f0 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/3114914985 conn(0x7f19240416f0 msgr2=0x7f1924043ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/3114914985 conn(0x7f19240416f0 0x7f1924043ae0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f19400664d0 tx=0x7f194006b6e0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 msgr2=0x7f193813d7c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 --2- 192.168.123.100:0/2839204157 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f193813d7c0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1934002410 tx=0x7f1934009da0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 shutdown_connections 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/3114914985 conn(0x7f19240416f0 0x7f1924043ae0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 --2- 192.168.123.100:0/2839204157 >> v2:192.168.123.100:6800/1655006894 conn(0x7f192403dca0 0x7f1924040160 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 --2- 192.168.123.100:0/2839204157 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19380a4910 0x7f193813d7c0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 >> 192.168.123.100:0/2839204157 conn(0x7f193809fc20 msgr2=0x7f19380a0730 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 shutdown_connections 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.319+0000 7f19458cc640 1 -- 192.168.123.100:0/2839204157 wait complete. 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 9 is available 2026-03-09T17:20:42.367 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate... 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 Processor -- start 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- start start 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f25401049e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2540104f20 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f25401049e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f25401049e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54898/0 (socket says 192.168.123.100:54898) 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253e575640 1 -- 192.168.123.100:0/4267029611 learned_addr learned my addr 192.168.123.100:0/4267029611 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253e575640 1 -- 192.168.123.100:0/4267029611 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f25401050a0 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253e575640 1 --2- 192.168.123.100:0/4267029611 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f25401049e0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f2528009920 tx=0x7f252802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=204f22b255a1c1e4 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253d573640 1 -- 192.168.123.100:0/4267029611 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f252803c070 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253d573640 1 -- 192.168.123.100:0/4267029611 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2528037440 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f253d573640 1 -- 192.168.123.100:0/4267029611 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f25280354d0 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- 192.168.123.100:0/4267029611 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 msgr2=0x7f25401049e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 --2- 192.168.123.100:0/4267029611 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f25401049e0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f2528009920 tx=0x7f252802ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- 192.168.123.100:0/4267029611 shutdown_connections 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 --2- 192.168.123.100:0/4267029611 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f25401049e0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- 192.168.123.100:0/4267029611 >> 192.168.123.100:0/4267029611 conn(0x7f2540100250 msgr2=0x7f25401026b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- 192.168.123.100:0/4267029611 shutdown_connections 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.491+0000 7f2544d79640 1 -- 192.168.123.100:0/4267029611 wait complete. 
2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 Processor -- start 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 -- start start 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f2540107810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f254010ab60 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f2540107810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f2540107810 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54904/0 (socket says 192.168.123.100:54904) 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253e575640 1 -- 192.168.123.100:0/3653562174 learned_addr learned my addr 192.168.123.100:0/3653562174 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253e575640 1 -- 192.168.123.100:0/3653562174 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2540107d50 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253e575640 1 --2- 192.168.123.100:0/3653562174 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f2540107810 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f2528002410 tx=0x7f252802fd20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2528047070 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2528042440 con 0x7f25401045e0 2026-03-09T17:20:42.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f252803c070 con 0x7f25401045e0 2026-03-09T17:20:42.707 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2540105f40 con 0x7f25401045e0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2540106360 con 0x7f25401045e0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7f2528041570 con 0x7f25401045e0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f25277fe640 1 --2- 192.168.123.100:0/3653562174 >> v2:192.168.123.100:6800/3114914985 conn(0x7f251003db30 0x7f251003fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7f2528077a30 con 0x7f25401045e0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253dd74640 1 --2- 192.168.123.100:0/3653562174 >> v2:192.168.123.100:6800/3114914985 conn(0x7f251003db30 0x7f251003fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f250c005180 con 0x7f25401045e0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.495+0000 7f253dd74640 1 --2- 192.168.123.100:0/3653562174 >> v2:192.168.123.100:6800/3114914985 conn(0x7f251003db30 0x7f251003fff0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f2534009a10 tx=0x7f2534006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.499+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2528047210 con 0x7f25401045e0 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.595+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) -- 0x7f250c002bf0 con 0x7f251003db30 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.663+0000 7f25277fe640 1 -- 192.168.123.100:0/3653562174 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f250c002bf0 con 0x7f251003db30 
2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 >> v2:192.168.123.100:6800/3114914985 conn(0x7f251003db30 msgr2=0x7f251003fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 --2- 192.168.123.100:0/3653562174 >> v2:192.168.123.100:6800/3114914985 conn(0x7f251003db30 0x7f251003fff0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f2534009a10 tx=0x7f2534006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 msgr2=0x7f2540107810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 --2- 192.168.123.100:0/3653562174 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f2540107810 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f2528002410 tx=0x7f252802fd20 comp rx=0 tx=0).stop 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 shutdown_connections 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 --2- 192.168.123.100:0/3653562174 >> v2:192.168.123.100:6800/3114914985 conn(0x7f251003db30 0x7f251003fff0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 --2- 192.168.123.100:0/3653562174 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f25401045e0 0x7f2540107810 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 >> 192.168.123.100:0/3653562174 conn(0x7f2540100250 msgr2=0x7f2540100dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 shutdown_connections 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.667+0000 7f2544d79640 1 -- 192.168.123.100:0/3653562174 wait complete. 2026-03-09T17:20:42.708 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user... 
2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$f/zKre8yd6WgJVPvJEJfBeeV4tyilCFQFskW.Hhka4MhNzp4kByR6", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773076843, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.831+0000 7fa16a963640 1 Processor -- start 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.831+0000 7fa16a963640 1 -- start start 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.831+0000 7fa16a963640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa164108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa164109540 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa164108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa164108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54910/0 (socket says 192.168.123.100:54910) 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 -- 192.168.123.100:0/2154192259 learned_addr learned my addr 192.168.123.100:0/2154192259 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 -- 192.168.123.100:0/2154192259 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa164109d70 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 --2- 192.168.123.100:0/2154192259 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa164108f70 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fa154009b80 tx=0x7fa15402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f6d73e8f7f3f5350 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa162ffd640 1 -- 192.168.123.100:0/2154192259 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa15403c070 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa162ffd640 1 -- 192.168.123.100:0/2154192259 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 
0x7fa154037440 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- 192.168.123.100:0/2154192259 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 msgr2=0x7fa164108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 --2- 192.168.123.100:0/2154192259 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa164108f70 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fa154009b80 tx=0x7fa15402f190 comp rx=0 tx=0).stop 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- 192.168.123.100:0/2154192259 shutdown_connections 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 --2- 192.168.123.100:0/2154192259 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa164108f70 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- 192.168.123.100:0/2154192259 >> 192.168.123.100:0/2154192259 conn(0x7fa16407c070 msgr2=0x7fa16407c480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- 192.168.123.100:0/2154192259 shutdown_connections 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- 192.168.123.100:0/2154192259 wait complete. 
2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 Processor -- start 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- start start 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa1640804f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa16a963640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa16410cfa0 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa1640804f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa1640804f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54924/0 (socket says 192.168.123.100:54924) 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 -- 192.168.123.100:0/2954268015 learned_addr learned my addr 192.168.123.100:0/2954268015 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.835+0000 7fa163fff640 1 -- 192.168.123.100:0/2954268015 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa164080a30 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa163fff640 1 --2- 192.168.123.100:0/2954268015 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa1640804f0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fa154035a70 tx=0x7fa154035aa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa154047070 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa154035d30 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa15403c040 con 0x7fa164108b70 2026-03-09T17:20:43.158 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa164080cc0 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa16407d030 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7fa154030050 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1617fa640 1 --2- 192.168.123.100:0/2954268015 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa14403db30 0x7fa14403fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fa154077660 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa1637fe640 1 --2- 192.168.123.100:0/2954268015 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa14403db30 0x7fa14403fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.839+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa130005180 con 0x7fa164108b70 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.843+0000 7fa1637fe640 1 --2- 192.168.123.100:0/2954268015 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa14403db30 0x7fa14403fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fa1500099c0 tx=0x7fa150006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.843+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa1540342a0 con 0x7fa164108b70 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:42.947+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) -- 0x7fa130003c00 con 0x7fa14403db30 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa1617fa640 1 -- 192.168.123.100:0/2954268015 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== 
mgr_command_reply(tid 0: 0 ) ==== 8+0+252 (secure 0 0 0) 0x7fa130003c00 con 0x7fa14403db30 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa14403db30 msgr2=0x7fa14403fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 --2- 192.168.123.100:0/2954268015 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa14403db30 0x7fa14403fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fa1500099c0 tx=0x7fa150006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 msgr2=0x7fa1640804f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 --2- 192.168.123.100:0/2954268015 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa1640804f0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fa154035a70 tx=0x7fa154035aa0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 shutdown_connections 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 --2- 192.168.123.100:0/2954268015 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa14403db30 0x7fa14403fff0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 --2- 192.168.123.100:0/2954268015 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa164108b70 0x7fa1640804f0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.107+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 >> 192.168.123.100:0/2954268015 conn(0x7fa16407c070 msgr2=0x7fa164106130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.111+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 shutdown_connections 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.111+0000 7fa16a963640 1 -- 192.168.123.100:0/2954268015 wait complete. 2026-03-09T17:20:43.159 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number... 
2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cluster 2026-03-09T17:20:42.314698+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e11: y(active, since 1.00933s) 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cluster 2026-03-09T17:20:42.314698+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e11: y(active, since 1.00933s) 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.439266+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTING 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.439266+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTING 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.553802+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.553802+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.554518+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Client ('192.168.123.100', 53988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.554518+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Client ('192.168.123.100', 53988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:42.598073+0000 mgr.y (mgr.14150) 6 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:42.598073+0000 mgr.y (mgr.14150) 6 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.654759+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.654759+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.654806+0000 mgr.y (mgr.14150) 8 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTED 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: cephadm 2026-03-09T17:20:42.654806+0000 mgr.y (mgr.14150) 8 : cephadm [INF] 
[09/Mar/2026:17:20:42] ENGINE Bus STARTED 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:42.662692+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:42.662692+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:42.667487+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:42.667487+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:43.108829+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:43.401 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:43 vm00 bash[20770]: audit 2026-03-09T17:20:43.108829+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe061332640 1 Processor -- start 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe061332640 1 -- start start 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe061332640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe061332640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe05c109540 con 0x7fe05c108b70 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe05affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.430 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe05affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54932/0 (socket says 192.168.123.100:54932) 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe05affd640 1 -- 192.168.123.100:0/1828788974 learned_addr learned my addr 192.168.123.100:0/1828788974 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe05affd640 1 -- 
192.168.123.100:0/1828788974 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe05c109d70 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe05affd640 1 --2- 192.168.123.100:0/1828788974 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c108f70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe050009920 tx=0x7fe05002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b8b4e7f777017eb2 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe059ffb640 1 -- 192.168.123.100:0/1828788974 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe05003c070 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe059ffb640 1 -- 192.168.123.100:0/1828788974 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe050037440 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.275+0000 7fe059ffb640 1 -- 192.168.123.100:0/1828788974 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe0500354d0 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1828788974 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 msgr2=0x7fe05c108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 --2- 192.168.123.100:0/1828788974 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c108f70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe050009920 tx=0x7fe05002ef20 comp rx=0 tx=0).stop 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1828788974 shutdown_connections 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 --2- 192.168.123.100:0/1828788974 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c108f70 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1828788974 >> 192.168.123.100:0/1828788974 conn(0x7fe05c07c070 msgr2=0x7fe05c07c480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1828788974 shutdown_connections 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1828788974 wait complete. 
2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 Processor -- start 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- start start 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c19e910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe05c10cfa0 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe05affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c19e910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe05affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c19e910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54948/0 (socket says 192.168.123.100:54948) 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe05affd640 1 -- 192.168.123.100:0/1319275870 learned_addr learned my addr 192.168.123.100:0/1319275870 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe05affd640 1 -- 192.168.123.100:0/1319275870 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe05c19ee50 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe05affd640 1 --2- 192.168.123.100:0/1319275870 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c19e910 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fe050002410 tx=0x7fe05002fd20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe050047070 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe050042440 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe05003c070 con 0x7fe05c108b70 2026-03-09T17:20:43.431 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe05c19f0e0 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.279+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe05c1a1dd0 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7fe050041570 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe028005180 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe03ffff640 1 --2- 192.168.123.100:0/1319275870 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe03403db30 0x7fe03403fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fe050077000 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe05a7fc640 1 --2- 192.168.123.100:0/1319275870 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe03403db30 0x7fe03403fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe05a7fc640 1 --2- 192.168.123.100:0/1319275870 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe03403db30 0x7fe03403fff0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fe048009a10 tx=0x7fe048006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.283+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe050047210 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.383+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) -- 0x7fe028005470 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe03ffff640 1 -- 192.168.123.100:0/1319275870 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": 
"mgr/dashboard/ssl_server_port"}]=0 v8) ==== 112+0+5 (secure 0 0 0) 0x7fe050037440 con 0x7fe05c108b70 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe03403db30 msgr2=0x7fe03403fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 --2- 192.168.123.100:0/1319275870 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe03403db30 0x7fe03403fff0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fe048009a10 tx=0x7fe048006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 msgr2=0x7fe05c19e910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 --2- 192.168.123.100:0/1319275870 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c19e910 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fe050002410 tx=0x7fe05002fd20 comp rx=0 tx=0).stop 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 shutdown_connections 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 --2- 192.168.123.100:0/1319275870 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe03403db30 0x7fe03403fff0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 --2- 192.168.123.100:0/1319275870 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe05c108b70 0x7fe05c19e910 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 >> 192.168.123.100:0/1319275870 conn(0x7fe05c07c070 msgr2=0x7fe05c105ef0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.387+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 shutdown_connections 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.391+0000 7fe061332640 1 -- 192.168.123.100:0/1319275870 wait complete. 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-09T17:20:43.431 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at: 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/ 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout: User: admin 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout: Password: gaec7uxugq 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.432 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config directory 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f964b2b2640 1 Processor -- start 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f964b2b2640 1 -- start start 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f964b2b2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f9644108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f964b2b2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f9644109540 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f9649027640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f9644108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f9649027640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f9644108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54958/0 (socket says 192.168.123.100:54958) 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f9649027640 1 -- 192.168.123.100:0/1257261806 learned_addr learned my addr 192.168.123.100:0/1257261806 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f9649027640 1 -- 192.168.123.100:0/1257261806 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9644109d70 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f9649027640 1 --2- 192.168.123.100:0/1257261806 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f9644108f70 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f9638009b80 tx=0x7f963802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a0bb13de31410a3b server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.559+0000 7f9633fff640 1 -- 192.168.123.100:0/1257261806 <== mon.0 
v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f963803c070 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9633fff640 1 -- 192.168.123.100:0/1257261806 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f9638037440 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/1257261806 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 msgr2=0x7f9644108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 --2- 192.168.123.100:0/1257261806 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f9644108f70 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f9638009b80 tx=0x7f963802f190 comp rx=0 tx=0).stop 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/1257261806 shutdown_connections 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 --2- 192.168.123.100:0/1257261806 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f9644108f70 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/1257261806 >> 192.168.123.100:0/1257261806 conn(0x7f964407c040 msgr2=0x7f964407c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/1257261806 shutdown_connections 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/1257261806 wait complete. 
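The bootstrap output above prints the dashboard endpoint and the generated admin credentials (URL https://vm00.local:8443/, user admin). As a minimal reachability check, not something this job runs itself: curl can probe that endpoint; -k is needed because bootstrap issued a self-signed certificate (see the "dashboard create-self-signed-cert" audit line earlier).

    # probe the dashboard URL printed by bootstrap; -k accepts the self-signed certificate
    curl -sk -o /dev/null -w '%{http_code}\n' https://vm00.local:8443/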
2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 Processor -- start 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- start start 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f964419ead0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f964410cfa0 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9649027640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f964419ead0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9649027640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f964419ead0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54962/0 (socket says 192.168.123.100:54962) 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9649027640 1 -- 192.168.123.100:0/3590720498 learned_addr learned my addr 192.168.123.100:0/3590720498 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9649027640 1 -- 192.168.123.100:0/3590720498 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f964419f010 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9649027640 1 --2- 192.168.123.100:0/3590720498 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f964419ead0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f963802f6c0 tx=0x7f9638035a00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f963803c040 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f9638035ce0 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f964419f2a0 con 0x7f9644108b70 2026-03-09T17:20:43.747 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9638003e20 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f96441a1f90 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7f9638030000 con 0x7f9644108b70 2026-03-09T17:20:43.747 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f96327fc640 1 --2- 192.168.123.100:0/3590720498 >> v2:192.168.123.100:6800/3114914985 conn(0x7f961c03db30 0x7f961c03fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7f96380779b0 con 0x7f9644108b70 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.563+0000 7f9648826640 1 --2- 192.168.123.100:0/3590720498 >> v2:192.168.123.100:6800/3114914985 conn(0x7f961c03db30 0x7f961c03fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.567+0000 7f9648826640 1 --2- 192.168.123.100:0/3590720498 >> v2:192.168.123.100:6800/3114914985 conn(0x7f961c03db30 0x7f961c03fff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f96340099c0 tx=0x7f9634006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.567+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f960c005180 con 0x7f9644108b70 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.567+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f963803d280 con 0x7f9644108b70 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.703+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) -- 0x7f960c005470 con 0x7f9644108b70 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f96327fc640 1 -- 192.168.123.100:0/3590720498 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status 
v24)=0 set mgr/dashboard/cluster/status v24) ==== 153+0+0 (secure 0 0 0) 0x7f963803fe50 con 0x7f9644108b70 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 >> v2:192.168.123.100:6800/3114914985 conn(0x7f961c03db30 msgr2=0x7f961c03fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 --2- 192.168.123.100:0/3590720498 >> v2:192.168.123.100:6800/3114914985 conn(0x7f961c03db30 0x7f961c03fff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f96340099c0 tx=0x7f9634006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 msgr2=0x7f964419ead0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 --2- 192.168.123.100:0/3590720498 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f964419ead0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f963802f6c0 tx=0x7f9638035a00 comp rx=0 tx=0).stop 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 shutdown_connections 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 --2- 192.168.123.100:0/3590720498 >> v2:192.168.123.100:6800/3114914985 conn(0x7f961c03db30 0x7f961c03fff0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 --2- 192.168.123.100:0/3590720498 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9644108b70 0x7f964419ead0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 >> 192.168.123.100:0/3590720498 conn(0x7f964407c040 msgr2=0x7f9644106170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.707+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 shutdown_connections 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-09T17:20:43.711+0000 7f964b2b2640 1 -- 192.168.123.100:0/3590720498 wait complete. 
2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:For more information see: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:20:43.748 INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete. 2026-03-09T17:20:43.769 INFO:tasks.cephadm:Fetching config... 2026-03-09T17:20:43.769 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:20:43.769 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T17:20:43.772 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T17:20:43.772 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:20:43.772 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T17:20:43.818 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T17:20:43.818 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:20:43.818 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.a/keyring of=/dev/stdout 2026-03-09T17:20:43.867 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T17:20:43.867 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:20:43.867 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T17:20:43.914 INFO:tasks.cephadm:Installing pub ssh key for root users... 
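The "Bootstrap complete." block above also documents how to reach the cluster CLI by hand. A minimal sketch reusing the cephadm path and fsid from this run (the trailing "-- ceph -s" is only an illustrative command, not something the job executes at this point):

    # explicit form: pin the config, keyring and fsid (safe when several clusters share the host)
    sudo /home/ubuntu/cephtest/cephadm shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -- ceph -s
    # short form: works when only one cluster exists on this host
    sudo /home/ubuntu/cephtest/cephadm shell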
2026-03-09T17:20:43.914 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMS8vm6y0TabM0+MhpSTHYvKhUzzSz2GjjcD2VmCO+5 ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T17:20:43.971 INFO:teuthology.orchestra.run.vm00.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMS8vm6y0TabM0+MhpSTHYvKhUzzSz2GjjcD2VmCO+5 ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:43.978 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMS8vm6y0TabM0+MhpSTHYvKhUzzSz2GjjcD2VmCO+5 ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T17:20:43.991 INFO:teuthology.orchestra.run.vm02.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMS8vm6y0TabM0+MhpSTHYvKhUzzSz2GjjcD2VmCO+5 ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:20:43.996 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T17:20:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: audit 2026-03-09T17:20:42.951535+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: audit 2026-03-09T17:20:42.951535+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: audit 2026-03-09T17:20:43.388645+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.100:0/1319275870' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T17:20:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: audit 2026-03-09T17:20:43.388645+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.100:0/1319275870' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T17:20:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: audit 2026-03-09T17:20:43.708435+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3590720498' entity='client.admin' 2026-03-09T17:20:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: audit 2026-03-09T17:20:43.708435+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 
192.168.123.100:0/3590720498' entity='client.admin' 2026-03-09T17:20:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: cluster 2026-03-09T17:20:44.113453+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T17:20:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:44 vm00 bash[20770]: cluster 2026-03-09T17:20:44.113453+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T17:20:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:47 vm00 bash[20770]: audit 2026-03-09T17:20:46.056239+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:47 vm00 bash[20770]: audit 2026-03-09T17:20:46.056239+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:47 vm00 bash[20770]: audit 2026-03-09T17:20:46.680281+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:47 vm00 bash[20770]: audit 2026-03-09T17:20:46.680281+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:47.912 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.a/config 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/3348646344 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 msgr2=0x7f36ec075480 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 --2- 192.168.123.100:0/3348646344 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec075480 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f36d4009a00 tx=0x7f36d402f310 comp rx=0 tx=0).stop 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/3348646344 shutdown_connections 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 --2- 192.168.123.100:0/3348646344 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec075480 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/3348646344 >> 192.168.123.100:0/3348646344 conn(0x7f36ec0fdaf0 msgr2=0x7f36ec0fff30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/3348646344 shutdown_connections 2026-03-09T17:20:48.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/3348646344 wait complete. 
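The key-installation step above (tee of ceph.pub into /root/.ssh/authorized_keys on vm00 and vm02) is what lets cephadm reach additional hosts as root. Outside teuthology, a rough equivalent sketch, with "newhost" as a hypothetical target name:

    # export the cluster's SSH public key and authorize it for root on the new host
    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub root@newhost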
2026-03-09T17:20:48.077 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 Processor -- start 2026-03-09T17:20:48.077 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- start start 2026-03-09T17:20:48.077 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec19b240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:48.077 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f36ec109c10 con 0x7f36ec077020 2026-03-09T17:20:48.077 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f0870640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec19b240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:48.078 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f0870640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec19b240 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54990/0 (socket says 192.168.123.100:54990) 2026-03-09T17:20:48.078 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f0870640 1 -- 192.168.123.100:0/256166564 learned_addr learned my addr 192.168.123.100:0/256166564 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:48.078 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f0870640 1 -- 192.168.123.100:0/256166564 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f36ec19b780 con 0x7f36ec077020 2026-03-09T17:20:48.078 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f0870640 1 --2- 192.168.123.100:0/256166564 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec19b240 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f36d4004300 tx=0x7f36d4004330 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:48.078 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f36d403d070 con 0x7f36ec077020 2026-03-09T17:20:48.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f36ec19ba10 con 0x7f36ec077020 2026-03-09T17:20:48.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.075+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f36ec19e700 con 0x7f36ec077020 2026-03-09T17:20:48.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f36d4045070 con 0x7f36ec077020 2026-03-09T17:20:48.079 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f36d4040580 con 0x7f36ec077020 2026-03-09T17:20:48.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f36d4040720 con 0x7f36ec077020 2026-03-09T17:20:48.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36e1ffb640 1 --2- 192.168.123.100:0/256166564 >> v2:192.168.123.100:6800/3114914985 conn(0x7f36c003dd40 0x7f36c0040200 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:48.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7f36d4077770 con 0x7f36ec077020 2026-03-09T17:20:48.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36e3fff640 1 --2- 192.168.123.100:0/256166564 >> v2:192.168.123.100:6800/3114914985 conn(0x7f36c003dd40 0x7f36c0040200 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:48.081 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.079+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f36ec10a3b0 con 0x7f36ec077020 2026-03-09T17:20:48.084 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.083+0000 7f36e3fff640 1 --2- 192.168.123.100:0/256166564 >> v2:192.168.123.100:6800/3114914985 conn(0x7f36c003dd40 0x7f36c0040200 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f36dc0099c0 tx=0x7f36dc006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:48.085 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.083+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f36d4035320 con 0x7f36ec077020 2026-03-09T17:20:48.175 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.175+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) -- 0x7f36ec194970 con 0x7f36ec077020 2026-03-09T17:20:48.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.175+0000 7f36e1ffb640 1 -- 192.168.123.100:0/256166564 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9)=0 v9) ==== 125+0+0 (secure 0 0 0) 0x7f36d40373d0 con 0x7f36ec077020 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 >> v2:192.168.123.100:6800/3114914985 conn(0x7f36c003dd40 msgr2=0x7f36c0040200 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 --2- 192.168.123.100:0/256166564 >> v2:192.168.123.100:6800/3114914985 conn(0x7f36c003dd40 0x7f36c0040200 
secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f36dc0099c0 tx=0x7f36dc006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 msgr2=0x7f36ec19b240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 --2- 192.168.123.100:0/256166564 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec19b240 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f36d4004300 tx=0x7f36d4004330 comp rx=0 tx=0).stop 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 shutdown_connections 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 --2- 192.168.123.100:0/256166564 >> v2:192.168.123.100:6800/3114914985 conn(0x7f36c003dd40 0x7f36c0040200 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:48.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 --2- 192.168.123.100:0/256166564 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f36ec077020 0x7f36ec19b240 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:48.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 >> 192.168.123.100:0/256166564 conn(0x7f36ec0fdaf0 msgr2=0x7f36ec0fe520 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:48.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 shutdown_connections 2026-03-09T17:20:48.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:48.183+0000 7f36f2afb640 1 -- 192.168.123.100:0/256166564 wait complete. 2026-03-09T17:20:48.240 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T17:20:48.240 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T17:20:49.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:49 vm00 bash[20770]: cluster 2026-03-09T17:20:48.060803+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T17:20:49.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:49 vm00 bash[20770]: cluster 2026-03-09T17:20:48.060803+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T17:20:49.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:49 vm00 bash[20770]: audit 2026-03-09T17:20:48.179518+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.100:0/256166564' entity='client.admin' 2026-03-09T17:20:49.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:49 vm00 bash[20770]: audit 2026-03-09T17:20:48.179518+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 
192.168.123.100:0/256166564' entity='client.admin' 2026-03-09T17:20:52.907 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.a/config 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- 192.168.123.100:0/625275592 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 msgr2=0x7fd2c0102fb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 --2- 192.168.123.100:0/625275592 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c0102fb0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7fd2a80099b0 tx=0x7fd2a802f2b0 comp rx=0 tx=0).stop 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- 192.168.123.100:0/625275592 shutdown_connections 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 --2- 192.168.123.100:0/625275592 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c0102fb0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- 192.168.123.100:0/625275592 >> 192.168.123.100:0/625275592 conn(0x7fd2c00fc9d0 msgr2=0x7fd2c00fedf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- 192.168.123.100:0/625275592 shutdown_connections 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- 192.168.123.100:0/625275592 wait complete. 
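The "Distributing conf and client.admin keyring to all hosts + 0755" step above asks cephadm to maintain the admin keyring on every host matching the '*' placement. A minimal sketch of the same call plus a verification step (the ls subcommand is only for checking and is not run by this job):

    # keep client.admin's keyring on all hosts, world-readable as this test requests (mode 0755)
    ceph orch client-keyring set client.admin '*' --mode 0755
    # list the client keyrings cephadm is managing
    ceph orch client-keyring ls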
2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 Processor -- start 2026-03-09T17:20:53.057 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- start start 2026-03-09T17:20:53.058 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c019f730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:53.058 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2c542c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd2c0105800 con 0x7fd2c0100ba0 2026-03-09T17:20:53.058 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2beffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c019f730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:53.058 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2beffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c019f730 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51368/0 (socket says 192.168.123.100:51368) 2026-03-09T17:20:53.058 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2beffd640 1 -- 192.168.123.100:0/3383093414 learned_addr learned my addr 192.168.123.100:0/3383093414 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:53.058 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2beffd640 1 -- 192.168.123.100:0/3383093414 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd2c019fc70 con 0x7fd2c0100ba0 2026-03-09T17:20:53.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.055+0000 7fd2beffd640 1 --2- 192.168.123.100:0/3383093414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c019f730 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fd2a80043c0 tx=0x7fd2a80043f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:53.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd2a8047070 con 0x7fd2c0100ba0 2026-03-09T17:20:53.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd2a803e070 con 0x7fd2c0100ba0 2026-03-09T17:20:53.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd2a8042740 con 0x7fd2c0100ba0 2026-03-09T17:20:53.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd2c019ff00 con 0x7fd2c0100ba0 2026-03-09T17:20:53.059 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd2c01a2bf0 con 0x7fd2c0100ba0 2026-03-09T17:20:53.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7fd2a8038470 con 0x7fd2c0100ba0 2026-03-09T17:20:53.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd2c0105fa0 con 0x7fd2c0100ba0 2026-03-09T17:20:53.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.059+0000 7fd29ffff640 1 --2- 192.168.123.100:0/3383093414 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd29403d980 0x7fd29403fe40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:53.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.063+0000 7fd2be7fc640 1 --2- 192.168.123.100:0/3383093414 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd29403d980 0x7fd29403fe40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:53.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.063+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fd2a8076be0 con 0x7fd2c0100ba0 2026-03-09T17:20:53.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.063+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd2a807a260 con 0x7fd2c0100ba0 2026-03-09T17:20:53.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.063+0000 7fd2be7fc640 1 --2- 192.168.123.100:0/3383093414 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd29403d980 0x7fd29403fe40 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fd2b4009a10 tx=0x7fd2b4006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:53.156 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.155+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) -- 0x7fd2c0199350 con 0x7fd29403d980 2026-03-09T17:20:53.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.159+0000 7fd29ffff640 1 -- 192.168.123.100:0/3383093414 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7fd2c0199350 con 0x7fd29403d980 2026-03-09T17:20:53.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd29403d980 msgr2=0x7fd29403fe40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:53.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 --2- 192.168.123.100:0/3383093414 >> v2:192.168.123.100:6800/3114914985 
conn(0x7fd29403d980 0x7fd29403fe40 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fd2b4009a10 tx=0x7fd2b4006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:53.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 msgr2=0x7fd2c019f730 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:53.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 --2- 192.168.123.100:0/3383093414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c019f730 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fd2a80043c0 tx=0x7fd2a80043f0 comp rx=0 tx=0).stop 2026-03-09T17:20:53.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 shutdown_connections 2026-03-09T17:20:53.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 --2- 192.168.123.100:0/3383093414 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd29403d980 0x7fd29403fe40 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:53.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 --2- 192.168.123.100:0/3383093414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c0100ba0 0x7fd2c019f730 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:53.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 >> 192.168.123.100:0/3383093414 conn(0x7fd2c00fc9d0 msgr2=0x7fd2c01969a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:53.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 shutdown_connections 2026-03-09T17:20:53.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:53.163+0000 7fd2c542c640 1 -- 192.168.123.100:0/3383093414 wait complete. 2026-03-09T17:20:53.260 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm02 2026-03-09T17:20:53.260 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:20:53.260 DEBUG:teuthology.orchestra.run.vm02:> dd of=/etc/ceph/ceph.conf 2026-03-09T17:20:53.263 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:20:53.263 DEBUG:teuthology.orchestra.run.vm02:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:20:53.305 INFO:tasks.cephadm:Adding host vm02 to orchestrator... 
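At this point the cephadm task has just streamed the minimal ceph.conf and the client.admin keyring onto vm02 by piping them into "dd of=..." under the remote shell (the DEBUG lines above), and is about to register the host with the orchestrator. A minimal sketch of that push, assuming plain ssh with passwordless sudo; this is only an illustration of the pattern seen in the log, not teuthology's own remote abstraction:

# Illustrative stand-in for the conf/keyring push shown in the DEBUG lines above.
# Assumes passwordless ssh and sudo to the target host; paths match the log.
import subprocess

def push_file(host: str, remote_path: str, data: bytes) -> None:
    # Mirrors the "dd of=<path>" pattern: the file body is streamed over ssh
    # into dd running under sudo on the remote node.
    subprocess.run(
        ["ssh", host, "sudo", "dd", f"of={remote_path}"],
        input=data,
        check=True,
    )

with open("ceph.conf", "rb") as f:
    push_file("vm02", "/etc/ceph/ceph.conf", f.read())
with open("ceph.client.admin.keyring", "rb") as f:
    push_file("vm02", "/etc/ceph/ceph.client.admin.keyring", f.read())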
2026-03-09T17:20:53.305 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch host add vm02 2026-03-09T17:20:53.433 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.432650+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.433 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.432650+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.433 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.435365+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.433 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.435365+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.436065+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.436065+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.438978+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.438978+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.444346+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.444346+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.447622+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:52.447622+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.160260+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
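The host is added through the same wrapper used for every orchestrator call in this run: the staged cephadm binary is invoked with "shell", pinned to the CI image and the cluster fsid, and the actual "ceph orch ..." command follows the "--" separator. A minimal sketch of driving that wrapper from Python, reusing the image, fsid, and paths from the command line above; the helper name is hypothetical and this is not the teuthology task's own code:

import subprocess

# Values copied from the cephadm shell command line in the log.
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "16190428-1bdc-11f1-aea4-d920f1c7e51e"

def cephadm_shell(*ceph_args: str) -> str:
    # Everything after "--" runs inside the cephadm shell container.
    cmd = [
        "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID,
        "--", "ceph", *ceph_args,
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

print(cephadm_shell("orch", "host", "add", "vm02"))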
2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.160260+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.160820+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.160820+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.161791+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.161791+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.162221+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.162221+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.303024+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.303024+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.305165+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.305165+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.307497+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:53.434 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:53 vm00 bash[20770]: audit 2026-03-09T17:20:53.307497+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: audit 2026-03-09T17:20:53.157490+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch 
client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: audit 2026-03-09T17:20:53.157490+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.162803+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.162803+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.197587+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.197587+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.229402+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.229402+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.270779+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:20:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:54 vm00 bash[20770]: cephadm 2026-03-09T17:20:53.270779+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:20:57.911 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.a/config 2026-03-09T17:20:58.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- 192.168.123.100:0/2794847364 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 msgr2=0x7fe4d0102940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 --2- 192.168.123.100:0/2794847364 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d0102940 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fe4bc009a00 tx=0x7fe4bc02f310 comp rx=0 tx=0).stop 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- 192.168.123.100:0/2794847364 shutdown_connections 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 --2- 192.168.123.100:0/2794847364 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7fe4d0102560 0x7fe4d0102940 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- 192.168.123.100:0/2794847364 >> 192.168.123.100:0/2794847364 conn(0x7fe4d00fe000 msgr2=0x7fe4d0100420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- 192.168.123.100:0/2794847364 shutdown_connections 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- 192.168.123.100:0/2794847364 wait complete. 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 Processor -- start 2026-03-09T17:20:58.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- start start 2026-03-09T17:20:58.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d01993c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:58.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4d6410640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe4d010bed0 con 0x7fe4d0102560 2026-03-09T17:20:58.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4cffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d01993c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:58.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4cffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d01993c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51392/0 (socket says 192.168.123.100:51392) 2026-03-09T17:20:58.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4cffff640 1 -- 192.168.123.100:0/4115258859 learned_addr learned my addr 192.168.123.100:0/4115258859 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:20:58.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4cffff640 1 -- 192.168.123.100:0/4115258859 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe4d0199900 con 0x7fe4d0102560 2026-03-09T17:20:58.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.067+0000 7fe4cffff640 1 --2- 192.168.123.100:0/4115258859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d01993c0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fe4bc009b30 tx=0x7fe4bc002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:58.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4bc0045b0 con 0x7fe4d0102560 2026-03-09T17:20:58.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 
<== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe4bc046070 con 0x7fe4d0102560 2026-03-09T17:20:58.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4bc0416b0 con 0x7fe4d0102560 2026-03-09T17:20:58.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe4d0199b90 con 0x7fe4d0102560 2026-03-09T17:20:58.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe4d019dc90 con 0x7fe4d0102560 2026-03-09T17:20:58.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7fe4bc038470 con 0x7fe4d0102560 2026-03-09T17:20:58.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe4d0103a40 con 0x7fe4d0102560 2026-03-09T17:20:58.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4cd7fa640 1 --2- 192.168.123.100:0/4115258859 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe4a403d930 0x7fe4a403fdf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:20:58.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.071+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fe4bc076860 con 0x7fe4d0102560 2026-03-09T17:20:58.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.075+0000 7fe4cf7fe640 1 --2- 192.168.123.100:0/4115258859 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe4a403d930 0x7fe4a403fdf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:20:58.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.075+0000 7fe4cf7fe640 1 --2- 192.168.123.100:0/4115258859 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe4a403d930 0x7fe4a403fdf0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fe4c00099c0 tx=0x7fe4c0006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:20:58.076 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.075+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe4bc0373d0 con 0x7fe4d0102560 2026-03-09T17:20:58.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:58.163+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}) -- 0x7fe4d019a050 con 0x7fe4a403d930 2026-03-09T17:20:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:58 vm00 bash[20770]: audit 
2026-03-09T17:20:58.168706+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:58 vm00 bash[20770]: audit 2026-03-09T17:20:58.168706+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:20:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:58 vm00 bash[20770]: cephadm 2026-03-09T17:20:58.741012+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-09T17:20:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:20:58 vm00 bash[20770]: cephadm 2026-03-09T17:20:58.741012+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-09T17:20:59.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.923+0000 7fe4cd7fa640 1 -- 192.168.123.100:0/4115258859 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7fe4d019a050 con 0x7fe4a403d930 2026-03-09T17:20:59.926 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm02' with addr '192.168.123.102' 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe4a403d930 msgr2=0x7fe4a403fdf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 --2- 192.168.123.100:0/4115258859 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe4a403d930 0x7fe4a403fdf0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fe4c00099c0 tx=0x7fe4c0006eb0 comp rx=0 tx=0).stop 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 msgr2=0x7fe4d01993c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 --2- 192.168.123.100:0/4115258859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d01993c0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fe4bc009b30 tx=0x7fe4bc002c80 comp rx=0 tx=0).stop 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 shutdown_connections 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 --2- 192.168.123.100:0/4115258859 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe4a403d930 0x7fe4a403fdf0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 --2- 192.168.123.100:0/4115258859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4d0102560 0x7fe4d01993c0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 >> 192.168.123.100:0/4115258859 conn(0x7fe4d00fe000 
msgr2=0x7fe4d010a230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 shutdown_connections 2026-03-09T17:20:59.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:20:59.927+0000 7fe4d6410640 1 -- 192.168.123.100:0/4115258859 wait complete. 2026-03-09T17:20:59.984 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch host ls --format=json 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: audit 2026-03-09T17:20:59.925972+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: audit 2026-03-09T17:20:59.925972+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: cephadm 2026-03-09T17:20:59.926544+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm02 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: cephadm 2026-03-09T17:20:59.926544+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm02 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: audit 2026-03-09T17:20:59.926813+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: audit 2026-03-09T17:20:59.926813+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: audit 2026-03-09T17:21:00.202428+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:00 vm00 bash[20770]: audit 2026-03-09T17:21:00.202428+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:02 vm00 bash[20770]: cluster 2026-03-09T17:21:01.318514+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:02 vm00 bash[20770]: cluster 2026-03-09T17:21:01.318514+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:02 vm00 bash[20770]: audit 2026-03-09T17:21:01.471381+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:02 vm00 bash[20770]: audit 2026-03-09T17:21:01.471381+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:02 vm00 
bash[20770]: audit 2026-03-09T17:21:02.012428+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:02 vm00 bash[20770]: audit 2026-03-09T17:21:02.012428+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:04.595 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.a/config 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- 192.168.123.100:0/2276371665 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 msgr2=0x7fd20c1036f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 --2- 192.168.123.100:0/2276371665 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1036f0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fd1fc0099b0 tx=0x7fd1fc02f2b0 comp rx=0 tx=0).stop 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- 192.168.123.100:0/2276371665 shutdown_connections 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 --2- 192.168.123.100:0/2276371665 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1036f0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- 192.168.123.100:0/2276371665 >> 192.168.123.100:0/2276371665 conn(0x7fd20c0fcd10 msgr2=0x7fd20c0ff130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- 192.168.123.100:0/2276371665 shutdown_connections 2026-03-09T17:21:04.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- 192.168.123.100:0/2276371665 wait complete. 
2026-03-09T17:21:04.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 Processor -- start 2026-03-09T17:21:04.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- start start 2026-03-09T17:21:04.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1970c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:04.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd212fd5640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd20c102b10 con 0x7fd20c103310 2026-03-09T17:21:04.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd211fd3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1970c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:04.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd211fd3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1970c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:49762/0 (socket says 192.168.123.100:49762) 2026-03-09T17:21:04.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.775+0000 7fd211fd3640 1 -- 192.168.123.100:0/1695427032 learned_addr learned my addr 192.168.123.100:0/1695427032 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:21:04.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd211fd3640 1 -- 192.168.123.100:0/1695427032 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd20c197600 con 0x7fd20c103310 2026-03-09T17:21:04.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd211fd3640 1 --2- 192.168.123.100:0/1695427032 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1970c0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fd1fc02f860 tx=0x7fd1fc004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:04.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd1fc047070 con 0x7fd20c103310 2026-03-09T17:21:04.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd1fc03e070 con 0x7fd20c103310 2026-03-09T17:21:04.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd1fc042460 con 0x7fd20c103310 2026-03-09T17:21:04.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd20c197890 con 0x7fd20c103310 2026-03-09T17:21:04.780 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd20c197c90 con 0x7fd20c103310 2026-03-09T17:21:04.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.779+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd20c06b2b0 con 0x7fd20c103310 2026-03-09T17:21:04.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.783+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7fd1fc038550 con 0x7fd20c103310 2026-03-09T17:21:04.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.783+0000 7fd1faffd640 1 --2- 192.168.123.100:0/1695427032 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd1e803dcf0 0x7fd1e80401b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:04.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.783+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fd1fc077120 con 0x7fd20c103310 2026-03-09T17:21:04.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.783+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd1fc077590 con 0x7fd20c103310 2026-03-09T17:21:04.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.783+0000 7fd2117d2640 1 --2- 192.168.123.100:0/1695427032 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd1e803dcf0 0x7fd1e80401b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:04.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:04 vm00 bash[20770]: cluster 2026-03-09T17:21:03.318721+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:04.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:04 vm00 bash[20770]: cluster 2026-03-09T17:21:03.318721+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:04.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.787+0000 7fd2117d2640 1 --2- 192.168.123.100:0/1695427032 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd1e803dcf0 0x7fd1e80401b0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fd2000099c0 tx=0x7fd200006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:04.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.883+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7fd20c198420 con 0x7fd1e803dcf0 2026-03-09T17:21:04.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.883+0000 7fd1faffd640 1 -- 192.168.123.100:0/1695427032 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+155 (secure 0 0 0) 0x7fd20c198420 con 0x7fd1e803dcf0 2026-03-09T17:21:04.885 
INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:21:04.885 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}] 2026-03-09T17:21:04.889 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd1e803dcf0 msgr2=0x7fd1e80401b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:04.889 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 --2- 192.168.123.100:0/1695427032 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd1e803dcf0 0x7fd1e80401b0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fd2000099c0 tx=0x7fd200006eb0 comp rx=0 tx=0).stop 2026-03-09T17:21:04.889 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 msgr2=0x7fd20c1970c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:04.889 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 --2- 192.168.123.100:0/1695427032 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1970c0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fd1fc02f860 tx=0x7fd1fc004270 comp rx=0 tx=0).stop 2026-03-09T17:21:04.889 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 shutdown_connections 2026-03-09T17:21:04.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 --2- 192.168.123.100:0/1695427032 >> v2:192.168.123.100:6800/3114914985 conn(0x7fd1e803dcf0 0x7fd1e80401b0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:04.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 --2- 192.168.123.100:0/1695427032 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd20c103310 0x7fd20c1970c0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:04.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 >> 192.168.123.100:0/1695427032 conn(0x7fd20c0fcd10 msgr2=0x7fd20c109720 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:04.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 shutdown_connections 2026-03-09T17:21:04.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:04.887+0000 7fd212fd5640 1 -- 192.168.123.100:0/1695427032 wait complete. 
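The "orch host ls --format=json" output above is what confirms vm02 made it into the orchestrator's host list alongside vm00. A minimal, self-contained sketch of the same check, parsing the JSON exactly as it appears in the stdout line; only standard-library calls are used:

import json

# Output captured from "ceph orch host ls --format=json" above.
host_ls = (
    '[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""},'
    ' {"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}]'
)

hosts = {h["hostname"]: h["addr"] for h in json.loads(host_ls)}
assert "vm02" in hosts, "vm02 was not registered with the orchestrator"
print(hosts)  # {'vm00': '192.168.123.100', 'vm02': '192.168.123.102'}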
2026-03-09T17:21:04.942 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T17:21:04.943 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd crush tunables default 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.743542+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.743542+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.746132+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.746132+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.748797+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.748797+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.750926+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.750926+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.751552+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.751552+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.752161+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.752161+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.752581+0000 mon.a 
(mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.752581+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.753164+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.753164+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.793150+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.793150+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.820455+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.820455+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.851476+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: cephadm 2026-03-09T17:21:04.851476+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.885530+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.885530+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.889893+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.889893+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.892779+0000 
mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.892779+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.894809+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:05 vm00 bash[20770]: audit 2026-03-09T17:21:04.894809+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:07.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:06 vm00 bash[20770]: cluster 2026-03-09T17:21:05.318939+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:07.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:06 vm00 bash[20770]: cluster 2026-03-09T17:21:05.318939+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:08.602 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.a/config 2026-03-09T17:21:08.751 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.747+0000 7f3520f53640 1 -- 192.168.123.100:0/732638146 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 msgr2=0x7f351c101800 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:08.751 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.747+0000 7f3520f53640 1 --2- 192.168.123.100:0/732638146 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c101800 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f3504009a00 tx=0x7f350402f310 comp rx=0 tx=0).stop 2026-03-09T17:21:08.751 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- 192.168.123.100:0/732638146 shutdown_connections 2026-03-09T17:21:08.751 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 --2- 192.168.123.100:0/732638146 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c101800 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- 192.168.123.100:0/732638146 >> 192.168.123.100:0/732638146 conn(0x7f351c0fcec0 msgr2=0x7f351c0ff2e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- 192.168.123.100:0/732638146 shutdown_connections 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- 192.168.123.100:0/732638146 wait complete. 
2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 Processor -- start 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- start start 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c0721c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f351c06b6b0 con 0x7f351c101420 2026-03-09T17:21:08.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f351a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c0721c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:08.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f351a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c0721c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:49784/0 (socket says 192.168.123.100:49784) 2026-03-09T17:21:08.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f351a575640 1 -- 192.168.123.100:0/3875453305 learned_addr learned my addr 192.168.123.100:0/3875453305 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:21:08.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f351a575640 1 -- 192.168.123.100:0/3875453305 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f351c072700 con 0x7f351c101420 2026-03-09T17:21:08.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f351a575640 1 --2- 192.168.123.100:0/3875453305 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c0721c0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f3504009b30 tx=0x7f3504002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:08.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f35040045b0 con 0x7f351c101420 2026-03-09T17:21:08.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f3504046070 con 0x7f351c101420 2026-03-09T17:21:08.754 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f35040416b0 con 0x7f351c101420 2026-03-09T17:21:08.754 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f351c072990 con 0x7f351c101420 2026-03-09T17:21:08.754 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f351c072e50 con 0x7f351c101420 2026-03-09T17:21:08.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f3504038470 con 0x7f351c101420 2026-03-09T17:21:08.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.751+0000 7f35037fe640 1 --2- 192.168.123.100:0/3875453305 >> v2:192.168.123.100:6800/3114914985 conn(0x7f34ec03d930 0x7f34ec03fdf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:08.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.755+0000 7f3519d74640 1 --2- 192.168.123.100:0/3875453305 >> v2:192.168.123.100:6800/3114914985 conn(0x7f34ec03d930 0x7f34ec03fdf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:08.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.755+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7f3504038790 con 0x7f351c101420 2026-03-09T17:21:08.756 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.755+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f351c102900 con 0x7f351c101420 2026-03-09T17:21:08.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.755+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f35040373d0 con 0x7f351c101420 2026-03-09T17:21:08.759 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.755+0000 7f3519d74640 1 --2- 192.168.123.100:0/3875453305 >> v2:192.168.123.100:6800/3114914985 conn(0x7f34ec03d930 0x7f34ec03fdf0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f35100099c0 tx=0x7f3510006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:08.849 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.847+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7f351c101800 con 0x7f351c101420 2026-03-09T17:21:08.902 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.899+0000 7f35037fe640 1 -- 192.168.123.100:0/3875453305 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) ==== 124+0+0 (secure 0 0 0) 0x7f3504038a70 con 0x7f351c101420 2026-03-09T17:21:08.904 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-09T17:21:08.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.903+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 >> v2:192.168.123.100:6800/3114914985 conn(0x7f34ec03d930 msgr2=0x7f34ec03fdf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:08.907 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.903+0000 7f3520f53640 1 --2- 192.168.123.100:0/3875453305 >> v2:192.168.123.100:6800/3114914985 conn(0x7f34ec03d930 0x7f34ec03fdf0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f35100099c0 tx=0x7f3510006eb0 comp rx=0 tx=0).stop 2026-03-09T17:21:08.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.903+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 msgr2=0x7f351c0721c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:08.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.903+0000 7f3520f53640 1 --2- 192.168.123.100:0/3875453305 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c0721c0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f3504009b30 tx=0x7f3504002c80 comp rx=0 tx=0).stop 2026-03-09T17:21:08.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.907+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 shutdown_connections 2026-03-09T17:21:08.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.907+0000 7f3520f53640 1 --2- 192.168.123.100:0/3875453305 >> v2:192.168.123.100:6800/3114914985 conn(0x7f34ec03d930 0x7f34ec03fdf0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:08.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.907+0000 7f3520f53640 1 --2- 192.168.123.100:0/3875453305 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f351c101420 0x7f351c0721c0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:08.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.907+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 >> 192.168.123.100:0/3875453305 conn(0x7f351c0fcec0 msgr2=0x7f351c109060 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:08.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.907+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 shutdown_connections 2026-03-09T17:21:08.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:08.907+0000 7f3520f53640 1 -- 192.168.123.100:0/3875453305 wait complete. 2026-03-09T17:21:08.959 INFO:tasks.cephadm:Adding mon.a on vm00 2026-03-09T17:21:08.959 INFO:tasks.cephadm:Adding mon.c on vm00 2026-03-09T17:21:08.959 INFO:tasks.cephadm:Adding mon.b on vm02 2026-03-09T17:21:08.959 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply mon '3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b' 2026-03-09T17:21:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:08 vm00 bash[20770]: cluster 2026-03-09T17:21:07.319228+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:08 vm00 bash[20770]: cluster 2026-03-09T17:21:07.319228+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:08 vm00 bash[20770]: audit 2026-03-09T17:21:08.851001+0000 mon.a (mon.0) 128 : audit [INF] from='client.? 
192.168.123.100:0/3875453305' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T17:21:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:08 vm00 bash[20770]: audit 2026-03-09T17:21:08.851001+0000 mon.a (mon.0) 128 : audit [INF] from='client.? 192.168.123.100:0/3875453305' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T17:21:10.067 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/1958464665 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 msgr2=0x7f05e4101450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 --2- 192.168.123.102:0/1958464665 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e4101450 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f05d40099b0 tx=0x7f05d402f2b0 comp rx=0 tx=0).stop 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/1958464665 shutdown_connections 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 --2- 192.168.123.102:0/1958464665 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e4101450 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/1958464665 >> 192.168.123.102:0/1958464665 conn(0x7f05e40fcec0 msgr2=0x7f05e40ff2e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/1958464665 shutdown_connections 2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/1958464665 wait complete. 
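The vm00 stderr lines above are client-side messenger (msgr2) debug output from the ceph CLI while it fetches the command descriptions and issues "osd crush tunables default" as a mon_command; the mon_command_ack carrying "adjusted tunables profile to default" confirms it succeeded. A minimal sketch of the same call through the python-rados binding, assuming python3-rados is available and the conf/keyring paths used by the shell commands in this log are readable, would be roughly:

    import json
    import rados

    # Connect as client.admin with the conf and keyring that cephadm wrote
    # on this host (paths as used by the shell commands in this log).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin',
                          conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'})
    cluster.connect()
    try:
        # Same payload as the mon_command({"prefix": "osd crush tunables", ...})
        # message visible in the debug output above.
        cmd = json.dumps({"prefix": "osd crush tunables", "profile": "default"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)  # expect 0 and 'adjusted tunables profile to default'
    finally:
        cluster.shutdown()

This is only an illustration of the wire traffic seen above, not part of the teuthology task itself.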
2026-03-09T17:21:10.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 Processor -- start 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- start start 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e419b660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f05e410e1b0 con 0x7f05e4101070 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05e8df3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e419b660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05e8df3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e419b660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:44782/0 (socket says 192.168.123.102:44782) 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05e8df3640 1 -- 192.168.123.102:0/3492749662 learned_addr learned my addr 192.168.123.102:0/3492749662 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:21:10.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05e8df3640 1 -- 192.168.123.102:0/3492749662 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f05e419bba0 con 0x7f05e4101070 2026-03-09T17:21:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05e8df3640 1 --2- 192.168.123.102:0/3492749662 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e419b660 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f05d40042c0 tx=0x7f05d40042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f05d4047070 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f05d403e070 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f05d4042710 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f05e419be30 con 0x7f05e4101070 2026-03-09T17:21:10.223 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f05e419c1d0 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f05d4038550 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f05ac005180 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05d9ffb640 1 --2- 192.168.123.102:0/3492749662 >> v2:192.168.123.100:6800/3114914985 conn(0x7f05bc03dcf0 0x7f05bc0401b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f05d4076d50 con 0x7f05e4101070 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.217+0000 7f05dbfff640 1 --2- 192.168.123.102:0/3492749662 >> v2:192.168.123.100:6800/3114914985 conn(0x7f05bc03dcf0 0x7f05bc0401b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.221+0000 7f05dbfff640 1 --2- 192.168.123.102:0/3492749662 >> v2:192.168.123.100:6800/3114914985 conn(0x7f05bc03dcf0 0x7f05bc0401b0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f05cc0099c0 tx=0x7f05cc006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.221+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f05d402fe20 con 0x7f05e4101070 2026-03-09T17:21:10.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:09 vm00 bash[20770]: audit 2026-03-09T17:21:08.902947+0000 mon.a (mon.0) 129 : audit [INF] from='client.? 192.168.123.100:0/3875453305' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T17:21:10.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:09 vm00 bash[20770]: audit 2026-03-09T17:21:08.902947+0000 mon.a (mon.0) 129 : audit [INF] from='client.? 
192.168.123.100:0/3875453305' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T17:21:10.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:09 vm00 bash[20770]: cluster 2026-03-09T17:21:08.905054+0000 mon.a (mon.0) 130 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:10.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:09 vm00 bash[20770]: cluster 2026-03-09T17:21:08.905054+0000 mon.a (mon.0) 130 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:10.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:09 vm00 bash[20770]: cluster 2026-03-09T17:21:09.319459+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:10.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:09 vm00 bash[20770]: cluster 2026-03-09T17:21:09.319459+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:10.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.317+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b", "target": ["mon-mgr", ""]}) -- 0x7f05ac002cc0 con 0x7f05bc03dcf0 2026-03-09T17:21:10.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05d9ffb640 1 -- 192.168.123.102:0/3492749662 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f05ac002cc0 con 0x7f05bc03dcf0 2026-03-09T17:21:10.326 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled mon update... 
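The "Scheduled mon update..." reply is the mgr acknowledging the "ceph orch apply mon" call issued a few entries earlier. Its placement argument is an explicit spec: a leading daemon count (3) followed by semicolon-separated host:addr=name entries, with mon.c given a full v2/v1 addrvec so it can share vm00 on the non-default ports 3301/6790. As a rough, hypothetical reconstruction (image, fsid and paths copied from this run; this is not the actual teuthology code), the same spec could be built and applied from Python like this:

    import subprocess

    FSID = '16190428-1bdc-11f1-aea4-d920f1c7e51e'   # fsid of this test cluster
    IMAGE = 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df'

    # count;host:addr=name entries, exactly as passed to 'ceph orch apply mon' above
    placement = ';'.join([
        '3',
        'vm00:192.168.123.100=a',
        'vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c',
        'vm02:192.168.123.102=b',
    ])

    subprocess.run([
        'sudo', '/home/ubuntu/cephtest/cephadm', '--image', IMAGE,
        'shell', '-c', '/etc/ceph/ceph.conf',
        '-k', '/etc/ceph/ceph.client.admin.keyring', '--fsid', FSID,
        '--', 'ceph', 'orch', 'apply', 'mon', placement,
    ], check=True)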
2026-03-09T17:21:10.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 >> v2:192.168.123.100:6800/3114914985 conn(0x7f05bc03dcf0 msgr2=0x7f05bc0401b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:10.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 --2- 192.168.123.102:0/3492749662 >> v2:192.168.123.100:6800/3114914985 conn(0x7f05bc03dcf0 0x7f05bc0401b0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f05cc0099c0 tx=0x7f05cc006eb0 comp rx=0 tx=0).stop 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 msgr2=0x7f05e419b660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 --2- 192.168.123.102:0/3492749662 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e419b660 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f05d40042c0 tx=0x7f05d40042f0 comp rx=0 tx=0).stop 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 shutdown_connections 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 --2- 192.168.123.102:0/3492749662 >> v2:192.168.123.100:6800/3114914985 conn(0x7f05bc03dcf0 0x7f05bc0401b0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 --2- 192.168.123.102:0/3492749662 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f05e4101070 0x7f05e419b660 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 >> 192.168.123.102:0/3492749662 conn(0x7f05e40fcec0 msgr2=0x7f05e40fe2a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 shutdown_connections 2026-03-09T17:21:10.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:10.325+0000 7f05eb07e640 1 -- 192.168.123.102:0/3492749662 wait complete. 2026-03-09T17:21:10.395 DEBUG:teuthology.orchestra.run.vm00:mon.c> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.c.service 2026-03-09T17:21:10.396 DEBUG:teuthology.orchestra.run.vm02:mon.b> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.b.service 2026-03-09T17:21:10.397 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
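Once the spec is saved, the task starts tailing the journals of the monitors it expects cephadm to deploy. The journalctl -f commands above show the unit naming convention for cephadm-managed daemons: one templated systemd unit ceph-<fsid>@<daemon>.service per daemon. A small illustrative helper (hypothetical, not part of teuthology) that derives those unit names and reports their state might look like:

    import subprocess

    FSID = '16190428-1bdc-11f1-aea4-d920f1c7e51e'

    def unit_name(daemon: str) -> str:
        # cephadm runs each daemon as a templated systemd unit named
        # ceph-<fsid>@<daemon>.service, as in the journalctl -f commands above.
        return f'ceph-{FSID}@{daemon}.service'

    for daemon in ('mon.c', 'mon.b'):
        # 'systemctl is-active' exits 0 once the unit is up; here we simply
        # print the reported state for each expected monitor.
        state = subprocess.run(['sudo', 'systemctl', 'is-active', unit_name(daemon)],
                               capture_output=True, text=True).stdout.strip()
        print(daemon, state)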
2026-03-09T17:21:10.397 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph mon dump -f json 2026-03-09T17:21:11.545 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.322614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.322614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: cephadm 2026-03-09T17:21:10.323769+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b;count:3 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: cephadm 2026-03-09T17:21:10.323769+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b;count:3 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.326431+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.326431+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.327015+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.327015+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.327973+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.327973+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
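"Waiting for 3 mons in monmap..." followed immediately by a "ceph mon dump -f json" invocation indicates the task polls the monmap until all three requested monitors have joined. The actual loop lives in the teuthology cephadm task and is not shown in this log; a minimal sketch under the same assumptions (same cephadm shell wrapper as above, and a "mons" array in the JSON monmap) would be:

    import json
    import subprocess
    import time

    FSID = '16190428-1bdc-11f1-aea4-d920f1c7e51e'
    IMAGE = 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df'
    CEPHADM_SHELL = ['sudo', '/home/ubuntu/cephtest/cephadm', '--image', IMAGE,
                     'shell', '-c', '/etc/ceph/ceph.conf',
                     '-k', '/etc/ceph/ceph.client.admin.keyring',
                     '--fsid', FSID, '--']

    def mons_in_map() -> int:
        # 'ceph mon dump -f json' prints the monmap as JSON on stdout; its
        # "mons" list holds one entry per monitor currently in the map.
        out = subprocess.run(CEPHADM_SHELL + ['ceph', 'mon', 'dump', '-f', 'json'],
                             check=True, capture_output=True, text=True).stdout
        return len(json.loads(out)['mons'])

    while mons_in_map() < 3:
        time.sleep(5)   # arbitrary poll interval for this sketch
    print('all 3 mons are in the monmap')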
2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.328352+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.328352+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.331071+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.331071+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.332010+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.332010+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.332387+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: audit 2026-03-09T17:21:10.332387+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: cephadm 2026-03-09T17:21:10.332879+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm02 2026-03-09T17:21:11.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:11 vm00 bash[20770]: cephadm 2026-03-09T17:21:10.332879+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm02 2026-03-09T17:21:11.879 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 -- 192.168.123.102:0/156017951 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c073c50 msgr2=0x7f7f5c074030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 --2- 192.168.123.102:0/156017951 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c073c50 0x7f7f5c074030 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f7f4c007920 tx=0x7f7f4c0300e0 comp rx=0 tx=0).stop 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 -- 192.168.123.102:0/156017951 shutdown_connections 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 --2- 192.168.123.102:0/156017951 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c073c50 0x7f7f5c074030 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 -- 192.168.123.102:0/156017951 >> 192.168.123.102:0/156017951 conn(0x7f7f5c06d270 msgr2=0x7f7f5c06d680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 -- 192.168.123.102:0/156017951 shutdown_connections 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.873+0000 7f7f619fc640 1 -- 192.168.123.102:0/156017951 wait complete. 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 Processor -- start 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 -- start start 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 0x7f7f5c07e6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7f5c077e90 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f609fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 0x7f7f5c07e6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f609fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 0x7f7f5c07e6c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:44814/0 (socket says 192.168.123.102:44814) 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f609fa640 1 -- 192.168.123.102:0/1861327029 learned_addr learned my addr 192.168.123.102:0/1861327029 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f609fa640 1 -- 192.168.123.102:0/1861327029 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7f5c086310 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f609fa640 1 --2- 192.168.123.102:0/1861327029 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 0x7f7f5c07e6c0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f7f4c030f70 tx=0x7f7f4c0036f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:11.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7f4c003af0 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.881 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7f5c07ec00 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7f5c07f0c0 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f7f4c034040 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7f4c042440 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f7f4c040020 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f59ffb640 1 --2- 192.168.123.102:0/1861327029 >> v2:192.168.123.100:6800/3114914985 conn(0x7f7f3c03dcf0 0x7f7f3c0401b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:11.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f7f4c0773f0 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f5bfff640 1 --2- 192.168.123.102:0/1861327029 >> v2:192.168.123.100:6800/3114914985 conn(0x7f7f3c03dcf0 0x7f7f3c0401b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:11.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f5bfff640 1 --2- 192.168.123.102:0/1861327029 >> v2:192.168.123.100:6800/3114914985 conn(0x7f7f3c03dcf0 0x7f7f3c0401b0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f7f5400ad30 tx=0x7f7f540093f0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:11.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.877+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7f5c074f00 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.889 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.881+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7f4c0409d0 con 0x7f7f5c07e2e0 2026-03-09T17:21:11.954 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:33.185519+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:11.982 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:11.977+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_map magic: 0 ==== 309+0+0 (secure 0 0 0) 0x7f7f4c04a340 con 0x7f7f5c07e2e0 2026-03-09T17:21:12.035 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:12.029+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f7f5c073c50 con 0x7f7f5c07e2e0 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:33.456437+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:33.456437+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:33.920857+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:33.920857+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:33.992891+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:33.992891+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.288954+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.288954+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.292658+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.292658+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:35.289904+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:35.289904+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 
2026-03-09T17:20:35.650496+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.650496+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:35.651340+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:35.651340+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.653942+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.653942+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.935663+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.935663+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.197702+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/2526236106' entity='client.admin' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.197702+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/2526236106' entity='client.admin' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.461848+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/360443459' entity='client.admin' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.461848+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 
192.168.123.100:0/360443459' entity='client.admin' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.932140+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:35.932140+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:35.932888+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:35.932888+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.806668+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.806668+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.832645+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/1830910416' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:36.832645+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/1830910416' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:37.114337+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:37.114337+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/3276404274' entity='mgr.y' 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:37.807224+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.100:0/1830910416' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:37.807224+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 
192.168.123.100:0/1830910416' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:37.810143+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:37.810143+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:38.186451+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/4251548214' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:21:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:38.186451+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/4251548214' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.305050+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.305050+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.305469+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon y 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.305469+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon y 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.310766+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.310766+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.310939+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00557159s) 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.310939+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00557159s) 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.315943+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.315943+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.316779+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": 
"y"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.316779+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.317616+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.317616+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.317973+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.317973+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.318294+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.318294+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.324863+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon y is now available 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:41.324863+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon y is now available 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.342022+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.342022+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.342995+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.342995+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.344551+0000 mon.a (mon.0) 88 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:41.344551+0000 mon.a (mon.0) 88 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:42.314698+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e11: y(active, since 1.00933s) 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:42.314698+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e11: y(active, since 1.00933s) 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.439266+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTING 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.439266+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTING 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.553802+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.553802+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.554518+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Client ('192.168.123.100', 53988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.554518+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Client ('192.168.123.100', 53988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.598073+0000 mgr.y (mgr.14150) 6 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.598073+0000 mgr.y (mgr.14150) 6 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.654759+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on 
http://192.168.123.100:8765 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.654759+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.654806+0000 mgr.y (mgr.14150) 8 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTED 2026-03-09T17:21:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:42.654806+0000 mgr.y (mgr.14150) 8 : cephadm [INF] [09/Mar/2026:17:20:42] ENGINE Bus STARTED 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.662692+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.662692+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.667487+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.667487+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:43.108829+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:43.108829+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.951535+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:42.951535+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:43.388645+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.100:0/1319275870' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:43.388645+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 
192.168.123.100:0/1319275870' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:43.708435+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3590720498' entity='client.admin' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:43.708435+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3590720498' entity='client.admin' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:44.113453+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:44.113453+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:46.056239+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:46.056239+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:46.680281+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:46.680281+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:48.060803+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:20:48.060803+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:48.179518+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.100:0/256166564' entity='client.admin' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:48.179518+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 
192.168.123.100:0/256166564' entity='client.admin' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.432650+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.432650+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.435365+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.435365+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.436065+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.436065+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.438978+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.438978+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.444346+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.444346+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.447622+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:52.447622+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.160260+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.160260+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.160820+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.160820+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.161791+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.161791+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.162221+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.162221+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.303024+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.303024+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.305165+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.305165+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.307497+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.307497+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.157490+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:53.157490+0000 mgr.y 
(mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.162803+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.162803+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.197587+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.197587+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:12.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.229402+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.229402+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.270779+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:53.270779+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:58.168706+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:58.168706+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:58.741012+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:58.741012+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:59.925972+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:59.925972+0000 mon.a (mon.0) 113 : audit [INF] 
from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:59.926544+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm02 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:20:59.926544+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm02 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:59.926813+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:20:59.926813+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:00.202428+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:00.202428+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:01.318514+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:01.318514+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:01.471381+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:01.471381+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:02.012428+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:02.012428+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:03.318721+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:03.318721+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.743542+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.743542+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.746132+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.746132+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.748797+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.748797+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.750926+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.750926+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.751552+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.751552+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.752161+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.752161+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.752581+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.752581+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.753164+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating 
vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.753164+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.793150+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.793150+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.820455+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.820455+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.851476+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:04.851476+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.885530+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.885530+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.889893+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.889893+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.892779+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.892779+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:04.894809+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 
bash[23351]: audit 2026-03-09T17:21:04.894809+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:05.318939+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:05.318939+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:07.319228+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:07.319228+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:08.851001+0000 mon.a (mon.0) 128 : audit [INF] from='client.? 192.168.123.100:0/3875453305' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:08.851001+0000 mon.a (mon.0) 128 : audit [INF] from='client.? 192.168.123.100:0/3875453305' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:08.902947+0000 mon.a (mon.0) 129 : audit [INF] from='client.? 192.168.123.100:0/3875453305' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T17:21:12.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:08.902947+0000 mon.a (mon.0) 129 : audit [INF] from='client.? 
192.168.123.100:0/3875453305' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:08.905054+0000 mon.a (mon.0) 130 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:08.905054+0000 mon.a (mon.0) 130 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:09.319459+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cluster 2026-03-09T17:21:09.319459+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.322614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.322614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:10.323769+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b;count:3 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:10.323769+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b;count:3 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.326431+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.326431+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.327015+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.327015+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:12.389 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.327973+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.327973+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.328352+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.328352+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.331071+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.331071+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.332010+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.332010+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.332387+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: audit 2026-03-09T17:21:10.332387+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:10.332879+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm02 2026-03-09T17:21:12.389 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:11 vm02 bash[23351]: cephadm 2026-03-09T17:21:10.332879+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm02 2026-03-09T17:21:12.649 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:12.649 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:13.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:13.037 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:12 vm00 bash[28333]: debug 2026-03-09T17:21:12.847+0000 7f108bc13640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T17:21:16.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.989+0000 7f7f59ffb640 1 -- 192.168.123.102:0/1861327029 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) ==== 95+0+1031 (secure 0 0 0) 0x7f7f4c048d50 con 0x7f7f5c07e2e0 2026-03-09T17:21:16.996 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:21:16.996 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":2,"fsid":"16190428-1bdc-11f1-aea4-d920f1c7e51e","modified":"2026-03-09T17:21:11.981246Z","created":"2026-03-09T17:20:17.747377Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T17:21:16.996 INFO:teuthology.orchestra.run.vm02.stderr:dumped monmap epoch 2 2026-03-09T17:21:16.998 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 >> v2:192.168.123.100:6800/3114914985 conn(0x7f7f3c03dcf0 msgr2=0x7f7f3c0401b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
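The monmap JSON above is the output of the "mon dump" command dispatched from vm02 (see audit entry 147 in the journal relay below). As a rough hand-run equivalent -- the use of cephadm shell and the json-pretty format are assumptions, not shown in this log -- the same state could be inspected with:

  $ cephadm shell -- ceph mon dump --format json-pretty   # full monmap, as dumped above
  $ cephadm shell -- ceph quorum_status                    # current quorum membership

At this point the map is at epoch 2 with mon.a and mon.b in quorum [0,1]; mon.c is still synchronizing, which matches the mon.c@-1(synchronizing) line above.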
2026-03-09T17:21:16.998 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 --2- 192.168.123.102:0/1861327029 >> v2:192.168.123.100:6800/3114914985 conn(0x7f7f3c03dcf0 0x7f7f3c0401b0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f7f5400ad30 tx=0x7f7f540093f0 comp rx=0 tx=0).stop 2026-03-09T17:21:16.998 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 msgr2=0x7f7f5c07e6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:16.998 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 --2- 192.168.123.102:0/1861327029 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 0x7f7f5c07e6c0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f7f4c030f70 tx=0x7f7f4c0036f0 comp rx=0 tx=0).stop 2026-03-09T17:21:16.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 shutdown_connections 2026-03-09T17:21:16.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 --2- 192.168.123.102:0/1861327029 >> v2:192.168.123.100:6800/3114914985 conn(0x7f7f3c03dcf0 0x7f7f3c0401b0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:16.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 --2- 192.168.123.102:0/1861327029 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f5c07e2e0 0x7f7f5c07e6c0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:16.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 >> 192.168.123.102:0/1861327029 conn(0x7f7f5c06d270 msgr2=0x7f7f5c076100 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:16.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 shutdown_connections 2026-03-09T17:21:16.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:16.993+0000 7f7f619fc640 1 -- 192.168.123.102:0/1861327029 wait complete. 
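Taken together, the audit entries relayed above trace the cephadm steps of this task: distributing the admin client keyring, adding the second host, setting CRUSH tunables to the default profile, and pinning the mon placement. Sketched as hand-run CLI equivalents (illustrative only; the test harness drives these through the mgr, and the exact invocations are not part of this excerpt):

  $ ceph orch client-keyring set client.admin '*' --mode 0755
  $ ceph orch host add vm02
  $ ceph orch host ls --format json
  $ ceph osd crush tunables default
  $ ceph orch apply mon --placement='3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm02:192.168.123.102=b'

The placement string (a count plus host[:addr]=name entries, ';'-separated) is what the mgr records as 'Saving service mon spec ...' and then acts on with 'Deploying daemon mon.b on vm02', which leads directly into the monitor election and the new a,b quorum in the entries that follow.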
2026-03-09T17:21:17.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:11.984261+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:17.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:11.984261+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:17.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:11.984930+0000 mon.a (mon.0) 145 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:11.984930+0000 mon.a (mon.0) 145 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:11.987091+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:11.987091+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:12.036187+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 192.168.123.102:0/1861327029' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:12.036187+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 
192.168.123.102:0/1861327029' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:12.859502+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:12.859502+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:12.972005+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:12.972005+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:13.319874+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:13.319874+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:13.856700+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:13.856700+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:13.972212+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:13.972212+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:13.976886+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:13.976886+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:14.856514+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:14.856514+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:14.972036+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:14.972036+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:15.320124+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:15.320124+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:15.856756+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:15.856756+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:15.971908+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:15.971908+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:16.856725+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:16.856725+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:16.972061+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:16.972061+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.385 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.990668+0000 mon.a (mon.0) 158 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.990668+0000 mon.a (mon.0) 158 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994233+0000 mon.a (mon.0) 159 : cluster [DBG] monmap epoch 2 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994233+0000 mon.a (mon.0) 159 : cluster [DBG] monmap epoch 2 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994279+0000 mon.a (mon.0) 160 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994279+0000 mon.a (mon.0) 160 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994324+0000 mon.a (mon.0) 161 : cluster [DBG] last_changed 2026-03-09T17:21:11.981246+0000 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994324+0000 mon.a (mon.0) 161 : cluster [DBG] last_changed 2026-03-09T17:21:11.981246+0000 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994363+0000 mon.a (mon.0) 162 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994363+0000 mon.a (mon.0) 162 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994403+0000 mon.a (mon.0) 163 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994403+0000 mon.a (mon.0) 163 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994442+0000 mon.a (mon.0) 164 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994442+0000 mon.a (mon.0) 164 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994481+0000 mon.a (mon.0) 165 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994481+0000 mon.a (mon.0) 165 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994521+0000 mon.a (mon.0) 166 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
mon.b 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.994521+0000 mon.a (mon.0) 166 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995022+0000 mon.a (mon.0) 167 : cluster [DBG] fsmap 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995022+0000 mon.a (mon.0) 167 : cluster [DBG] fsmap 2026-03-09T17:21:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995105+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995105+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995359+0000 mon.a (mon.0) 169 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995359+0000 mon.a (mon.0) 169 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995702+0000 mon.a (mon.0) 170 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: cluster 2026-03-09T17:21:16.995702+0000 mon.a (mon.0) 170 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.001874+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.001874+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.006467+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.006467+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.012771+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.012771+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.018815+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.018815+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.028788+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:17 vm02 bash[23351]: audit 2026-03-09T17:21:17.028788+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:11.984261+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:11.984261+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:11.984930+0000 mon.a (mon.0) 145 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:11.984930+0000 mon.a (mon.0) 145 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:11.987091+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:11.987091+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:12.036187+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 192.168.123.102:0/1861327029' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:12.036187+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 
192.168.123.102:0/1861327029' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:12.859502+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:12.859502+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:12.972005+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:12.972005+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:13.319874+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:13.319874+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:13.856700+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:13.856700+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:13.972212+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:13.972212+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:13.976886+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:13.976886+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:14.856514+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:14.856514+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:14.972036+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:14.972036+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:15.320124+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:15.320124+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:15.856756+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:15.856756+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:15.971908+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:15.971908+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:16.856725+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:16.856725+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:16.972061+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:16.972061+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:17.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.990668+0000 mon.a (mon.0) 158 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.990668+0000 mon.a (mon.0) 158 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994233+0000 mon.a (mon.0) 159 : cluster [DBG] monmap epoch 2 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994233+0000 mon.a (mon.0) 159 : cluster [DBG] monmap epoch 2 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994279+0000 mon.a (mon.0) 160 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994279+0000 mon.a (mon.0) 160 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994324+0000 mon.a (mon.0) 161 : cluster [DBG] last_changed 2026-03-09T17:21:11.981246+0000 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994324+0000 mon.a (mon.0) 161 : cluster [DBG] last_changed 2026-03-09T17:21:11.981246+0000 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994363+0000 mon.a (mon.0) 162 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994363+0000 mon.a (mon.0) 162 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994403+0000 mon.a (mon.0) 163 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994403+0000 mon.a (mon.0) 163 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994442+0000 mon.a (mon.0) 164 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994442+0000 mon.a (mon.0) 164 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994481+0000 mon.a (mon.0) 165 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994481+0000 mon.a (mon.0) 165 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994521+0000 mon.a (mon.0) 166 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
mon.b 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.994521+0000 mon.a (mon.0) 166 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995022+0000 mon.a (mon.0) 167 : cluster [DBG] fsmap 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995022+0000 mon.a (mon.0) 167 : cluster [DBG] fsmap 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995105+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995105+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995359+0000 mon.a (mon.0) 169 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995359+0000 mon.a (mon.0) 169 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995702+0000 mon.a (mon.0) 170 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: cluster 2026-03-09T17:21:16.995702+0000 mon.a (mon.0) 170 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.001874+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.001874+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.006467+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.006467+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.012771+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.012771+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.018815+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.018815+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.028788+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:17.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:17 vm00 bash[20770]: audit 2026-03-09T17:21:17.028788+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:18.070 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T17:21:18.071 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph mon dump -f json 2026-03-09T17:21:18.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:18 vm02 bash[23351]: cluster 2026-03-09T17:21:17.320359+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:18.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:18 vm02 bash[23351]: cluster 2026-03-09T17:21:17.320359+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:18.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:18 vm02 bash[23351]: audit 2026-03-09T17:21:17.856637+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:18.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:18 vm02 bash[23351]: audit 2026-03-09T17:21:17.856637+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:18 vm02 bash[23351]: audit 2026-03-09T17:21:17.972230+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:18 vm02 bash[23351]: audit 2026-03-09T17:21:17.972230+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:18 vm00 bash[20770]: cluster 2026-03-09T17:21:17.320359+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:18 vm00 bash[20770]: cluster 2026-03-09T17:21:17.320359+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:18 vm00 bash[20770]: audit 2026-03-09T17:21:17.856637+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:18 vm00 bash[20770]: audit 2026-03-09T17:21:17.856637+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:18 vm00 bash[20770]: audit 2026-03-09T17:21:17.972230+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:18 vm00 bash[20770]: audit 2026-03-09T17:21:17.972230+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:19.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:21:18 vm00 bash[21037]: debug 2026-03-09T17:21:18.971+0000 7f588b5b8640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T17:21:21.804 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:21:24.269 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:18.862169+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:24.269 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:18.862169+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:24.269 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:18.862322+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:18.862322+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:18.862371+0000 mon.a (mon.0) 181 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:18.862371+0000 mon.a (mon.0) 181 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:18.863344+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:18.863344+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:18.865445+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:18.865445+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: 
cluster 2026-03-09T17:21:19.320526+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:19.320526+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:19.856943+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:19.856943+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:20.856837+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:20.856837+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:21.320688+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:21.320688+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:21.856993+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:21.856993+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:22.856995+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:22.856995+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.320842+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.320842+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.270 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.857130+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.857130+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.864262+0000 mon.a (mon.0) 188 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.864262+0000 mon.a (mon.0) 188 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867334+0000 mon.a (mon.0) 189 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867334+0000 mon.a (mon.0) 189 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867343+0000 mon.a (mon.0) 190 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867343+0000 mon.a (mon.0) 190 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867346+0000 mon.a (mon.0) 191 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867346+0000 mon.a (mon.0) 191 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867349+0000 mon.a (mon.0) 192 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867349+0000 mon.a (mon.0) 192 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867354+0000 mon.a (mon.0) 193 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867354+0000 mon.a (mon.0) 193 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867357+0000 mon.a (mon.0) 194 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867357+0000 mon.a (mon.0) 194 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867362+0000 
mon.a (mon.0) 195 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867362+0000 mon.a (mon.0) 195 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867369+0000 mon.a (mon.0) 196 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867369+0000 mon.a (mon.0) 196 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867375+0000 mon.a (mon.0) 197 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867375+0000 mon.a (mon.0) 197 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867782+0000 mon.a (mon.0) 198 : cluster [DBG] fsmap 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867782+0000 mon.a (mon.0) 198 : cluster [DBG] fsmap 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867802+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867802+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867911+0000 mon.a (mon.0) 200 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T17:21:24.270 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.867911+0000 mon.a (mon.0) 200 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.868000+0000 mon.a (mon.0) 201 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.868000+0000 mon.a (mon.0) 201 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.870356+0000 mon.a (mon.0) 202 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.870356+0000 mon.a (mon.0) 202 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.870494+0000 mon.a (mon.0) 203 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 
2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.870494+0000 mon.a (mon.0) 203 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.870515+0000 mon.a (mon.0) 204 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] is down (out of quorum) 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cluster 2026-03-09T17:21:23.870515+0000 mon.a (mon.0) 204 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] is down (out of quorum) 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.872922+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.872922+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.875006+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.875006+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.877240+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.877240+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.880182+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.880182+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.883104+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.883104+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.883872+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.883872+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.884325+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: audit 2026-03-09T17:21:23.884325+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.884850+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.884850+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.884977+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:24.271 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:23 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.884977+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:18.862169+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:18.862169+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:18.862322+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:18.862322+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:18.862371+0000 mon.a (mon.0) 181 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:18.862371+0000 mon.a (mon.0) 181 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:18.863344+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:18.863344+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:18.865445+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:18.865445+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:19.320526+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:19.320526+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:19.856943+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:19.856943+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:20.856837+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:20.856837+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:21.320688+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:21.320688+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:21.856993+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:21.856993+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:22.856995+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:22.856995+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: 
dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.320842+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.320842+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.857130+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.857130+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.864262+0000 mon.a (mon.0) 188 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.864262+0000 mon.a (mon.0) 188 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867334+0000 mon.a (mon.0) 189 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867334+0000 mon.a (mon.0) 189 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867343+0000 mon.a (mon.0) 190 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867343+0000 mon.a (mon.0) 190 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867346+0000 mon.a (mon.0) 191 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867346+0000 mon.a (mon.0) 191 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867349+0000 mon.a (mon.0) 192 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867349+0000 mon.a (mon.0) 192 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867354+0000 mon.a (mon.0) 193 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867354+0000 mon.a (mon.0) 193 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:24.385 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867357+0000 mon.a (mon.0) 194 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867357+0000 mon.a (mon.0) 194 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867362+0000 mon.a (mon.0) 195 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867362+0000 mon.a (mon.0) 195 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867369+0000 mon.a (mon.0) 196 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867369+0000 mon.a (mon.0) 196 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867375+0000 mon.a (mon.0) 197 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867375+0000 mon.a (mon.0) 197 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867782+0000 mon.a (mon.0) 198 : cluster [DBG] fsmap 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867782+0000 mon.a (mon.0) 198 : cluster [DBG] fsmap 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867802+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867802+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867911+0000 mon.a (mon.0) 200 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.867911+0000 mon.a (mon.0) 200 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.868000+0000 mon.a (mon.0) 201 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.868000+0000 mon.a (mon.0) 201 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.870356+0000 mon.a (mon.0) 202 : cluster 
[WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.870356+0000 mon.a (mon.0) 202 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.870494+0000 mon.a (mon.0) 203 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.870494+0000 mon.a (mon.0) 203 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.870515+0000 mon.a (mon.0) 204 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] is down (out of quorum) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cluster 2026-03-09T17:21:23.870515+0000 mon.a (mon.0) 204 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] is down (out of quorum) 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.872922+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.872922+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.875006+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.875006+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.877240+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.877240+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.880182+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.880182+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.883104+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.883104+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:24.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.883872+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.883872+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.884325+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: audit 2026-03-09T17:21:23.884325+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.884850+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.884850+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.884977+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:23 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.884977+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.945+0000 7fcd18b73640 1 -- 192.168.123.102:0/3573063308 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 msgr2=0x7fccf4005680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.945+0000 7fcd18b73640 1 --2- 192.168.123.102:0/3573063308 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fccf4005680 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7fcd08002a00 tx=0x7fcd08030570 comp rx=0 tx=0).stop 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/3573063308 shutdown_connections 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 --2- 192.168.123.102:0/3573063308 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fccf4005680 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/3573063308 >> 192.168.123.102:0/3573063308 conn(0x7fcd141006d0 msgr2=0x7fcd14102ac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/3573063308 shutdown_connections 2026-03-09T17:21:24.953 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/3573063308 wait complete. 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 Processor -- start 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- start start 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fcd141a1580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 0x7fcd1419b650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd1419bb90 0x7fcd1419c040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fcd1410bc60 con 0x7fcd1410a640 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fcd1410bae0 con 0x7fcd1419bb90 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fcd1410bde0 con 0x7fcd141a1ac0 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fcd141a1580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd11d74640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 0x7fcd1419b650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fcd141a1580 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:46720/0 (socket says 192.168.123.102:46720) 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 -- 192.168.123.102:0/1713209376 learned_addr learned my addr 192.168.123.102:0/1713209376 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:21:24.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd11d74640 1 -- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 msgr2=0x7fcd1419b650 unknown :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd11d74640 1 -- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 msgr2=0x7fcd1419b650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd11d74640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 0x7fcd1419b650 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd11d74640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 0x7fcd1419b650 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 -- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 msgr2=0x7fcd1419b650 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 0x7fcd1419b650 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 -- 192.168.123.102:0/1713209376 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd1419bb90 msgr2=0x7fcd1419c040 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd1419bb90 0x7fcd1419c040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:24.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 -- 192.168.123.102:0/1713209376 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcd141a8330 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd12575640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fcd141a1580 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fcd08015090 tx=0x7fcd08039e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fcd0803e040 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fcd141a85c0 con 
0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fcd141a8ad0 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fcd080054c0 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fcd08004e50 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7fcd08041030 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fccfb7fe640 1 --2- 192.168.123.102:0/1713209376 >> v2:192.168.123.100:6800/3114914985 conn(0x7fcce8046920 0x7fcce8048de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7fcd080777c0 con 0x7fcd1410a640 2026-03-09T17:21:24.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.949+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fcce0005180 con 0x7fcd1410a640 2026-03-09T17:21:24.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.953+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fcd08035c60 con 0x7fcd1410a640 2026-03-09T17:21:24.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.953+0000 7fcd11d74640 1 --2- 192.168.123.102:0/1713209376 >> v2:192.168.123.100:6800/3114914985 conn(0x7fcce8046920 0x7fcce8048de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:24.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:24.957+0000 7fcd11d74640 1 --2- 192.168.123.102:0/1713209376 >> v2:192.168.123.100:6800/3114914985 conn(0x7fcce8046920 0x7fcce8048de0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fccfc006fb0 tx=0x7fccfc008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:25.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.161+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7fcce0005470 con 0x7fcd1410a640 2026-03-09T17:21:25.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.161+0000 7fccfb7fe640 1 -- 192.168.123.102:0/1713209376 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped 
monmap epoch 3 v3) ==== 95+0+1307 (secure 0 0 0) 0x7fcd08043e30 con 0x7fcd1410a640 2026-03-09T17:21:25.164 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:21:25.165 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":3,"fsid":"16190428-1bdc-11f1-aea4-d920f1c7e51e","modified":"2026-03-09T17:21:18.858224Z","created":"2026-03-09T17:20:17.747377Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3301","nonce":0},{"type":"v1","addr":"192.168.123.100:6790","nonce":0}]},"addr":"192.168.123.100:6790/0","public_addr":"192.168.123.100:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T17:21:25.165 INFO:teuthology.orchestra.run.vm02.stderr:dumped monmap epoch 3 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.161+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 >> v2:192.168.123.100:6800/3114914985 conn(0x7fcce8046920 msgr2=0x7fcce8048de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.161+0000 7fcd18b73640 1 --2- 192.168.123.102:0/1713209376 >> v2:192.168.123.100:6800/3114914985 conn(0x7fcce8046920 0x7fcce8048de0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fccfc006fb0 tx=0x7fccfc008040 comp rx=0 tx=0).stop 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.161+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 msgr2=0x7fcd141a1580 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.161+0000 7fcd18b73640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fcd141a1580 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fcd08015090 tx=0x7fcd08039e20 comp rx=0 tx=0).stop 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 shutdown_connections 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 --2- 192.168.123.102:0/1713209376 >> v2:192.168.123.100:6800/3114914985 conn(0x7fcce8046920 0x7fcce8048de0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:25.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd1419bb90 0x7fcd1419c040 
unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:25.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fcd141a1ac0 0x7fcd1419b650 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:25.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 --2- 192.168.123.102:0/1713209376 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd1410a640 0x7fcd141a1580 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:25.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 >> 192.168.123.102:0/1713209376 conn(0x7fcd141006d0 msgr2=0x7fcd14108180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:25.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 shutdown_connections 2026-03-09T17:21:25.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:25.165+0000 7fcd18b73640 1 -- 192.168.123.102:0/1713209376 wait complete. 2026-03-09T17:21:25.231 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T17:21:25.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph config generate-minimal-conf 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.919023+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.919023+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.935340+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:23.935340+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.967394+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.967394+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.971821+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.971821+0000 mon.a (mon.0) 213 : audit 
[INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.980659+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.231 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.980659+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.988274+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.988274+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.991924+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:23.991924+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.004126+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.004126+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.007247+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.007247+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.012483+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.012483+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.015214+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.015214+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.015429+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.015429+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.015743+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.015743+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.016152+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.016152+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.016521+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.016521+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.017026+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.017026+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.378904+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.378904+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.382343+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.382343+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.382924+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.382924+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.383086+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.383086+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.383456+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.383456+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.383801+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.383801+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.384226+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.384226+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.731204+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.731204+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.734528+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.734528+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.735088+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.735088+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.735265+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.735265+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.735702+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.735702+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.736017+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.736017+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.736437+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm02 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: cephadm 2026-03-09T17:21:24.736437+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm02 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.857424+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:25.232 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:24 vm02 bash[23351]: audit 2026-03-09T17:21:24.857424+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.919023+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.919023+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: 
cephadm 2026-03-09T17:21:23.935340+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:23.935340+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.967394+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.967394+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.971821+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.971821+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.980659+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.980659+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.988274+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.988274+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.991924+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:23.991924+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.004126+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.004126+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.007247+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.239 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.007247+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.012483+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.012483+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.015214+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.015214+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.015429+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.015429+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.015743+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.015743+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.016152+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.016152+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.016521+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.016521+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.017026+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.017026+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00 
2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.378904+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.378904+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.382343+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.382343+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.382924+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.382924+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.383086+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.383086+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.383456+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.383456+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.383801+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.383801+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.384226+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.384226+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-09T17:21:25.240 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.731204+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.731204+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.734528+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.734528+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.735088+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.735088+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.735265+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.735265+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.735702+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.735702+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.736017+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.736017+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.736437+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm02 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: cephadm 2026-03-09T17:21:24.736437+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm02 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 
vm00 bash[20770]: audit 2026-03-09T17:21:24.857424+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:25.240 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:24 vm00 bash[20770]: audit 2026-03-09T17:21:24.857424+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:11.984261+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:11.984261+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:11.984930+0000 mon.a (mon.0) 145 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:11.984930+0000 mon.a (mon.0) 145 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:11.987091+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:11.987091+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:12.036187+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 192.168.123.102:0/1861327029' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:12.036187+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 
192.168.123.102:0/1861327029' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:12.859502+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:12.859502+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:12.972005+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:12.972005+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:13.319874+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:13.319874+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:13.856700+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:13.856700+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:13.972212+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:13.972212+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:13.976886+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:13.976886+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:14.856514+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:14.856514+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:14.972036+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:14.972036+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.146 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:15.320124+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:15.320124+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:15.856756+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:15.856756+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:15.971908+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:15.971908+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:16.856725+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:16.856725+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:16.972061+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:16.972061+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.990668+0000 mon.a (mon.0) 158 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.990668+0000 mon.a (mon.0) 158 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994233+0000 mon.a (mon.0) 159 : cluster [DBG] monmap epoch 2 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994233+0000 mon.a (mon.0) 159 : cluster [DBG] monmap epoch 2 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994279+0000 mon.a (mon.0) 160 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994279+0000 mon.a (mon.0) 160 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994324+0000 mon.a (mon.0) 161 : cluster [DBG] last_changed 2026-03-09T17:21:11.981246+0000 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994324+0000 mon.a (mon.0) 161 : cluster [DBG] last_changed 2026-03-09T17:21:11.981246+0000 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994363+0000 mon.a (mon.0) 162 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994363+0000 mon.a (mon.0) 162 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994403+0000 mon.a (mon.0) 163 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994403+0000 mon.a (mon.0) 163 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994442+0000 mon.a (mon.0) 164 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994442+0000 mon.a (mon.0) 164 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994481+0000 mon.a (mon.0) 165 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994481+0000 mon.a (mon.0) 165 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994521+0000 mon.a (mon.0) 166 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
mon.b 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.994521+0000 mon.a (mon.0) 166 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995022+0000 mon.a (mon.0) 167 : cluster [DBG] fsmap 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995022+0000 mon.a (mon.0) 167 : cluster [DBG] fsmap 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995105+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995105+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995359+0000 mon.a (mon.0) 169 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995359+0000 mon.a (mon.0) 169 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995702+0000 mon.a (mon.0) 170 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:16.995702+0000 mon.a (mon.0) 170 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.001874+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.001874+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.006467+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.006467+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.012771+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.012771+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.018815+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.018815+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.028788+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.028788+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:17.320359+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:17.320359+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.856637+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.856637+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.972230+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:17.972230+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:18.862169+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:18.862169+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:18.862322+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:18.862322+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:18.862371+0000 mon.a (mon.0) 181 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.147 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:18.862371+0000 mon.a (mon.0) 181 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:18.863344+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.147 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:18.863344+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:18.865445+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:18.865445+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:19.320526+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:19.320526+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:19.856943+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:19.856943+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:20.856837+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:20.856837+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:21.320688+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:21.320688+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:21.856993+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: 
audit 2026-03-09T17:21:21.856993+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:22.856995+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:22.856995+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.320842+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.320842+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.857130+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.857130+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.864262+0000 mon.a (mon.0) 188 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.864262+0000 mon.a (mon.0) 188 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867334+0000 mon.a (mon.0) 189 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867334+0000 mon.a (mon.0) 189 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867343+0000 mon.a (mon.0) 190 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867343+0000 mon.a (mon.0) 190 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867346+0000 mon.a (mon.0) 191 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867346+0000 mon.a (mon.0) 191 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 
2026-03-09T17:21:23.867349+0000 mon.a (mon.0) 192 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867349+0000 mon.a (mon.0) 192 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867354+0000 mon.a (mon.0) 193 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867354+0000 mon.a (mon.0) 193 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867357+0000 mon.a (mon.0) 194 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867357+0000 mon.a (mon.0) 194 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867362+0000 mon.a (mon.0) 195 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867362+0000 mon.a (mon.0) 195 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867369+0000 mon.a (mon.0) 196 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867369+0000 mon.a (mon.0) 196 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867375+0000 mon.a (mon.0) 197 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867375+0000 mon.a (mon.0) 197 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867782+0000 mon.a (mon.0) 198 : cluster [DBG] fsmap 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867782+0000 mon.a (mon.0) 198 : cluster [DBG] fsmap 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867802+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867802+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.867911+0000 mon.a (mon.0) 200 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 
bash[28333]: cluster 2026-03-09T17:21:23.867911+0000 mon.a (mon.0) 200 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.868000+0000 mon.a (mon.0) 201 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.868000+0000 mon.a (mon.0) 201 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.870356+0000 mon.a (mon.0) 202 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.870356+0000 mon.a (mon.0) 202 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.870494+0000 mon.a (mon.0) 203 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.870494+0000 mon.a (mon.0) 203 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.870515+0000 mon.a (mon.0) 204 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] is down (out of quorum) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cluster 2026-03-09T17:21:23.870515+0000 mon.a (mon.0) 204 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] is down (out of quorum) 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.872922+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.872922+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.875006+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.148 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.875006+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.877240+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.877240+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.880182+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.880182+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.883104+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.883104+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.883872+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.883872+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.884325+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.884325+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.884850+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.884850+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.884977+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.884977+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.919023+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.919023+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:23.935340+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 
vm00 bash[28333]: cephadm 2026-03-09T17:21:23.935340+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.967394+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.967394+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.971821+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.971821+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.980659+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.980659+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.988274+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.988274+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.991924+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:23.991924+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.004126+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.004126+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.007247+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.007247+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.012483+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' 
entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.012483+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.015214+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.015214+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.015429+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.015429+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.015743+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.015743+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.016152+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.016152+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.016521+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.016521+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.017026+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.017026+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T17:21:26.149 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.378904+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.149 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.378904+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.382343+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.382343+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.382924+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.382924+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.383086+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.383086+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.383456+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.383456+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.383801+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.383801+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.384226+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.384226+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.731204+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.731204+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.734528+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.734528+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.735088+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.735088+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.735265+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.735265+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.735702+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.735702+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.736017+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.736017+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.736437+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm02 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: cephadm 2026-03-09T17:21:24.736437+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm02 2026-03-09T17:21:26.150 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.857424+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.150 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:25 vm00 bash[28333]: audit 2026-03-09T17:21:24.857424+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:20.859235+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:20.859235+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.321077+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.321077+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.871294+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.871294+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.872419+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.872419+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.873469+0000 mon.a (mon.0) 243 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.873469+0000 mon.a (mon.0) 243 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.875289+0000 mon.a (mon.0) 244 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.875289+0000 mon.a (mon.0) 244 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878938+0000 mon.a (mon.0) 245 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878938+0000 mon.a (mon.0) 245 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878954+0000 mon.a (mon.0) 246 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878954+0000 mon.a (mon.0) 246 : cluster [DBG] fsid 
16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878965+0000 mon.a (mon.0) 247 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878965+0000 mon.a (mon.0) 247 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878975+0000 mon.a (mon.0) 248 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878975+0000 mon.a (mon.0) 248 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878984+0000 mon.a (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878984+0000 mon.a (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878995+0000 mon.a (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.878995+0000 mon.a (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879004+0000 mon.a (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879004+0000 mon.a (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879014+0000 mon.a (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879014+0000 mon.a (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879024+0000 mon.a (mon.0) 253 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879024+0000 mon.a (mon.0) 253 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879264+0000 mon.a (mon.0) 254 : cluster [DBG] fsmap 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879264+0000 mon.a (mon.0) 254 : cluster [DBG] fsmap 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 
2026-03-09T17:21:25.879285+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879285+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879399+0000 mon.a (mon.0) 256 : cluster [DBG] mgrmap e13: y(active, since 44s) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879399+0000 mon.a (mon.0) 256 : cluster [DBG] mgrmap e13: y(active, since 44s) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879480+0000 mon.a (mon.0) 257 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879480+0000 mon.a (mon.0) 257 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879492+0000 mon.a (mon.0) 258 : cluster [INF] Cluster is now healthy 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.879492+0000 mon.a (mon.0) 258 : cluster [INF] Cluster is now healthy 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.882204+0000 mon.a (mon.0) 259 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:26 vm02 bash[23351]: cluster 2026-03-09T17:21:25.882204+0000 mon.a (mon.0) 259 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:20.859235+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:20.859235+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.321077+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.321077+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.871294+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.871294+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.872419+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.872419+0000 mon.b (mon.1) 
3 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.873469+0000 mon.a (mon.0) 243 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.873469+0000 mon.a (mon.0) 243 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.875289+0000 mon.a (mon.0) 244 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.875289+0000 mon.a (mon.0) 244 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878938+0000 mon.a (mon.0) 245 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878938+0000 mon.a (mon.0) 245 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878954+0000 mon.a (mon.0) 246 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878954+0000 mon.a (mon.0) 246 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878965+0000 mon.a (mon.0) 247 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878965+0000 mon.a (mon.0) 247 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878975+0000 mon.a (mon.0) 248 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878975+0000 mon.a (mon.0) 248 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878984+0000 mon.a (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878984+0000 mon.a (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878995+0000 mon.a (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.878995+0000 mon.a (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879004+0000 mon.a (mon.0) 251 : cluster [DBG] 0: 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879004+0000 mon.a (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879014+0000 mon.a (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879014+0000 mon.a (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879024+0000 mon.a (mon.0) 253 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879024+0000 mon.a (mon.0) 253 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879264+0000 mon.a (mon.0) 254 : cluster [DBG] fsmap 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879264+0000 mon.a (mon.0) 254 : cluster [DBG] fsmap 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879285+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879285+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879399+0000 mon.a (mon.0) 256 : cluster [DBG] mgrmap e13: y(active, since 44s) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879399+0000 mon.a (mon.0) 256 : cluster [DBG] mgrmap e13: y(active, since 44s) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879480+0000 mon.a (mon.0) 257 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879480+0000 mon.a (mon.0) 257 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879492+0000 mon.a (mon.0) 258 : cluster [INF] Cluster is now healthy 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.879492+0000 mon.a (mon.0) 258 : cluster [INF] Cluster is now healthy 2026-03-09T17:21:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 2026-03-09T17:21:25.882204+0000 mon.a (mon.0) 259 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:26 vm00 bash[20770]: cluster 
2026-03-09T17:21:25.882204+0000 mon.a (mon.0) 259 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:20.859235+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:20.859235+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.321077+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.321077+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.871294+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.871294+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.872419+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.872419+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.873469+0000 mon.a (mon.0) 243 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.873469+0000 mon.a (mon.0) 243 : cluster [INF] mon.a calling monitor election 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.875289+0000 mon.a (mon.0) 244 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.875289+0000 mon.a (mon.0) 244 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878938+0000 mon.a (mon.0) 245 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878938+0000 mon.a (mon.0) 245 : cluster [DBG] monmap epoch 3 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878954+0000 mon.a (mon.0) 246 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878954+0000 mon.a (mon.0) 246 : cluster [DBG] fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878965+0000 mon.a (mon.0) 247 
: cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878965+0000 mon.a (mon.0) 247 : cluster [DBG] last_changed 2026-03-09T17:21:18.858224+0000 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878975+0000 mon.a (mon.0) 248 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878975+0000 mon.a (mon.0) 248 : cluster [DBG] created 2026-03-09T17:20:17.747377+0000 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878984+0000 mon.a (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878984+0000 mon.a (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878995+0000 mon.a (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.878995+0000 mon.a (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879004+0000 mon.a (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879004+0000 mon.a (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879014+0000 mon.a (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879014+0000 mon.a (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879024+0000 mon.a (mon.0) 253 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879024+0000 mon.a (mon.0) 253 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879264+0000 mon.a (mon.0) 254 : cluster [DBG] fsmap 2026-03-09T17:21:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879264+0000 mon.a (mon.0) 254 : cluster [DBG] fsmap 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879285+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 
2026-03-09T17:21:25.879285+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879399+0000 mon.a (mon.0) 256 : cluster [DBG] mgrmap e13: y(active, since 44s) 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879399+0000 mon.a (mon.0) 256 : cluster [DBG] mgrmap e13: y(active, since 44s) 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879480+0000 mon.a (mon.0) 257 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879480+0000 mon.a (mon.0) 257 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879492+0000 mon.a (mon.0) 258 : cluster [INF] Cluster is now healthy 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.879492+0000 mon.a (mon.0) 258 : cluster [INF] Cluster is now healthy 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.882204+0000 mon.a (mon.0) 259 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:26.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:26 vm00 bash[28333]: cluster 2026-03-09T17:21:25.882204+0000 mon.a (mon.0) 259 : cluster [INF] overall HEALTH_OK 2026-03-09T17:21:27.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:27 vm00 bash[20770]: audit 2026-03-09T17:21:26.857543+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:27.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:27 vm00 bash[20770]: audit 2026-03-09T17:21:26.857543+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:27.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:27 vm00 bash[28333]: audit 2026-03-09T17:21:26.857543+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:27.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:27 vm00 bash[28333]: audit 2026-03-09T17:21:26.857543+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:27.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:27 vm02 bash[23351]: audit 2026-03-09T17:21:26.857543+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:27.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:27 vm02 bash[23351]: audit 2026-03-09T17:21:26.857543+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:21:28.153 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:21:27 vm00 bash[21037]: debug 2026-03-09T17:21:27.855+0000 
7f588b5b8640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T17:21:28.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:28 vm00 bash[20770]: cluster 2026-03-09T17:21:27.321240+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:28.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:28 vm00 bash[20770]: cluster 2026-03-09T17:21:27.321240+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:28 vm00 bash[28333]: cluster 2026-03-09T17:21:27.321240+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:28 vm00 bash[28333]: cluster 2026-03-09T17:21:27.321240+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:28.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:28 vm02 bash[23351]: cluster 2026-03-09T17:21:27.321240+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:28.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:28 vm02 bash[23351]: cluster 2026-03-09T17:21:27.321240+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:29.852 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- 192.168.123.100:0/1345388719 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 msgr2=0x7f0fe8113140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- 192.168.123.100:0/1345388719 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe8113140 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f0fdc009a30 tx=0x7f0fdc02f2f0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- 192.168.123.100:0/1345388719 shutdown_connections 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- 192.168.123.100:0/1345388719 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8113680 0x7f0fe8115b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- 192.168.123.100:0/1345388719 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe8113140 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- 192.168.123.100:0/1345388719 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8101090 0x7f0fe8101470 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- 192.168.123.100:0/1345388719 >> 192.168.123.100:0/1345388719 conn(0x7f0fe8077fc0 
msgr2=0x7f0fe80ff370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- 192.168.123.100:0/1345388719 shutdown_connections 2026-03-09T17:21:30.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- 192.168.123.100:0/1345388719 wait complete. 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 Processor -- start 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- start start 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 0x7f0fe81a0d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe81a1290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8113680 0x7f0fe81a5620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0fe8118940 con 0x7f0fe81019b0 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f0fe81187c0 con 0x7f0fe8101090 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe7577640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0fe8118ac0 con 0x7f0fe8113680 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe6575640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 0x7f0fe81a0d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:30.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe6575640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 0x7f0fe81a0d50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:35524/0 (socket says 192.168.123.100:35524) 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe6575640 1 -- 192.168.123.100:0/2594849089 learned_addr learned my addr 192.168.123.100:0/2594849089 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe6d76640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8113680 0x7f0fe81a5620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.007+0000 7f0fe5d74640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe81a1290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6575640 1 -- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8113680 msgr2=0x7f0fe81a5620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6575640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8113680 0x7f0fe81a5620 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6575640 1 -- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 msgr2=0x7f0fe81a1290 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6575640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe81a1290 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6575640 1 -- 192.168.123.100:0/2594849089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0fe81a5d00 con 0x7f0fe8101090 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe5d74640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe81a1290 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T17:21:30.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6d76640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8113680 0x7f0fe81a5620 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:21:30.012 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe6575640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 0x7f0fe81a0d50 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f0fd400cab0 tx=0x7f0fd400cf80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:30.012 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0fd4013070 con 0x7f0fe8101090 2026-03-09T17:21:30.012 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0fe81a5ff0 con 0x7f0fe8101090 2026-03-09T17:21:30.012 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0fe81ad8d0 con 0x7f0fe8101090 2026-03-09T17:21:30.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0fd4007da0 con 0x7f0fe8101090 2026-03-09T17:21:30.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0fd4002e50 con 0x7f0fe8101090 2026-03-09T17:21:30.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f0fd4020020 con 0x7f0fe8101090 2026-03-09T17:21:30.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0fe8101470 con 0x7f0fe8101090 2026-03-09T17:21:30.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fd37fe640 1 --2- 192.168.123.100:0/2594849089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f0fc403df50 0x7f0fc4040410 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:30.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f0fd4051df0 con 0x7f0fe8101090 2026-03-09T17:21:30.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe5d74640 1 --2- 192.168.123.100:0/2594849089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f0fc403df50 0x7f0fc4040410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:30.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.011+0000 7f0fe5d74640 1 --2- 192.168.123.100:0/2594849089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f0fc403df50 0x7f0fc4040410 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8112c40 tx=0x7f0fdc0057d0 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:30.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.015+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0fd400f510 con 0x7f0fe8101090 2026-03-09T17:21:30.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f0fe80fe4e0 con 0x7f0fe8101090 2026-03-09T17:21:30.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fd37fe640 1 -- 192.168.123.100:0/2594849089 <== mon.1 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v9) ==== 76+0+289 (secure 0 0 0) 0x7f0fd401d1d0 con 0x7f0fe8101090 2026-03-09T17:21:30.113 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:30.113 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-09T17:21:30.113 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:21:30.113 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f0fc403df50 msgr2=0x7f0fc4040410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 --2- 192.168.123.100:0/2594849089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f0fc403df50 0x7f0fc4040410 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8112c40 tx=0x7f0fdc0057d0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 msgr2=0x7f0fe81a0d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 0x7f0fe81a0d50 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f0fd400cab0 tx=0x7f0fd400cf80 comp rx=0 tx=0).stop 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 shutdown_connections 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 --2- 192.168.123.100:0/2594849089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f0fc403df50 0x7f0fc4040410 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0fe8113680 0x7f0fe81a5620 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0fe81019b0 0x7f0fe81a1290 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.114 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 --2- 192.168.123.100:0/2594849089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fe8101090 0x7f0fe81a0d50 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:30.115 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 >> 192.168.123.100:0/2594849089 conn(0x7f0fe8077fc0 msgr2=0x7f0fe8112c90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:30.115 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 shutdown_connections 2026-03-09T17:21:30.115 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:30.111+0000 7f0fe7577640 1 -- 192.168.123.100:0/2594849089 wait complete. 2026-03-09T17:21:30.171 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-09T17:21:30.171 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:21:30.171 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T17:21:30.221 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:21:30.222 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:30.270 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:21:30.270 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T17:21:30.277 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:21:30.277 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:21:30.325 INFO:tasks.cephadm:Adding mgr.y on vm00 2026-03-09T17:21:30.325 INFO:tasks.cephadm:Adding mgr.x on vm02 2026-03-09T17:21:30.325 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply mgr '2;vm00=y;vm02=x' 2026-03-09T17:21:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:30 vm00 bash[20770]: cluster 2026-03-09T17:21:29.321451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:30 vm00 bash[20770]: cluster 2026-03-09T17:21:29.321451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:30 vm00 bash[20770]: audit 2026-03-09T17:21:30.113578+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.100:0/2594849089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:30 vm00 bash[20770]: audit 2026-03-09T17:21:30.113578+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 
192.168.123.100:0/2594849089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:30.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:30 vm00 bash[28333]: cluster 2026-03-09T17:21:29.321451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:30.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:30 vm00 bash[28333]: cluster 2026-03-09T17:21:29.321451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:30.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:30 vm00 bash[28333]: audit 2026-03-09T17:21:30.113578+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.100:0/2594849089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:30.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:30 vm00 bash[28333]: audit 2026-03-09T17:21:30.113578+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.100:0/2594849089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:30.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:30 vm02 bash[23351]: cluster 2026-03-09T17:21:29.321451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:30.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:30 vm02 bash[23351]: cluster 2026-03-09T17:21:29.321451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:30.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:30 vm02 bash[23351]: audit 2026-03-09T17:21:30.113578+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.100:0/2594849089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:30.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:30 vm02 bash[23351]: audit 2026-03-09T17:21:30.113578+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 
192.168.123.100:0/2594849089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:32.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:32 vm02 bash[23351]: cluster 2026-03-09T17:21:31.321606+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:32.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:32 vm02 bash[23351]: cluster 2026-03-09T17:21:31.321606+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:32.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:32 vm00 bash[28333]: cluster 2026-03-09T17:21:31.321606+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:32.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:32 vm00 bash[28333]: cluster 2026-03-09T17:21:31.321606+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:32.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:32 vm00 bash[20770]: cluster 2026-03-09T17:21:31.321606+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:32.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:32 vm00 bash[20770]: cluster 2026-03-09T17:21:31.321606+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:33.975 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:21:34.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- 192.168.123.102:0/4215859507 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 msgr2=0x7f56c4105980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:34.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- 192.168.123.102:0/4215859507 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4105980 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f56b0009a30 tx=0x7f56b002f220 comp rx=0 tx=0).stop 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- 192.168.123.102:0/4215859507 shutdown_connections 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- 192.168.123.102:0/4215859507 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c41087f0 0x7f56c410ac00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- 192.168.123.102:0/4215859507 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f56c4105ec0 0x7f56c41082b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- 192.168.123.102:0/4215859507 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4105980 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- 192.168.123.102:0/4215859507 >> 
192.168.123.102:0/4215859507 conn(0x7f56c40fd110 msgr2=0x7f56c40ff550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- 192.168.123.102:0/4215859507 shutdown_connections 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- 192.168.123.102:0/4215859507 wait complete. 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 Processor -- start 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- start start 2026-03-09T17:21:34.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4102e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c4105ec0 0x7f56c41014c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c37fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c4105ec0 0x7f56c41014c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4102e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4102e70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:53394/0 (socket says 192.168.123.102:53394) 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f56c41087f0 0x7f56c4101a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f56c410dd10 con 0x7f56c4105ec0 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f56c410db90 con 0x7f56c4103590 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56ca948640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f56c410de90 con 0x7f56c41087f0 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 -- 192.168.123.102:0/2251561613 learned_addr learned my addr 192.168.123.102:0/2251561613 
(peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 -- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f56c41087f0 msgr2=0x7f56c4101a00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f56c41087f0 0x7f56c4101a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 -- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c4105ec0 msgr2=0x7f56c41014c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c4105ec0 0x7f56c41014c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c3fff640 1 -- 192.168.123.102:0/2251561613 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f56c4101f40 con 0x7f56c4103590 2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.109+0000 7f56c37fe640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c4105ec0 0x7f56c41014c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:21:34.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c3fff640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4102e70 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f56b0009a00 tx=0x7f56b002fdf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f56b0004280 con 0x7f56c4103590 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f56b0004420 con 0x7f56c4103590 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f56b00055f0 con 0x7f56c4103590 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f56c41021d0 con 0x7f56c4103590 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f56c41a9190 con 0x7f56c4103590 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f56b0038770 con 0x7f56c4103590 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c17fa640 1 --2- 192.168.123.102:0/2251561613 >> v2:192.168.123.100:6800/3114914985 conn(0x7f56a403de00 0x7f56a40402c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:34.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f56b0077750 con 0x7f56c4103590 2026-03-09T17:21:34.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.113+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5690005180 con 0x7f56c4103590 2026-03-09T17:21:34.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.117+0000 7f56c37fe640 1 --2- 192.168.123.102:0/2251561613 >> v2:192.168.123.100:6800/3114914985 conn(0x7f56a403de00 0x7f56a40402c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:34.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.117+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f56b004ad60 con 0x7f56c4103590 
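For reference, the mgr placement requested above with "ceph orch apply mgr '2;vm00=y;vm02=x'" (run through the cephadm shell wrapper on vm02) can also be expressed as a YAML service specification. The lines below are a hypothetical sketch only, not taken from this run; the file name mgr-spec.yaml is illustrative, and with a spec the daemon ids are chosen by the orchestrator rather than pinned to "x"/"y".

# Hypothetical sketch -- same intent as the inline placement string used above.
cat > mgr-spec.yaml <<'EOF'
service_type: mgr
placement:
  count: 2
  hosts:
    - vm00
    - vm02
EOF
# In this test the command is wrapped in "cephadm ... shell -- ..."; shown bare here.
ceph orch apply -i mgr-spec.yaml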
2026-03-09T17:21:34.121 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.117+0000 7f56c37fe640 1 --2- 192.168.123.102:0/2251561613 >> v2:192.168.123.100:6800/3114914985 conn(0x7f56a403de00 0x7f56a40402c0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f56b4006fd0 tx=0x7f56b4008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:34.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.213+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}) -- 0x7f5690002bf0 con 0x7f56a403de00 2026-03-09T17:21:34.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.217+0000 7f56c17fa640 1 -- 192.168.123.102:0/2251561613 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f5690002bf0 con 0x7f56a403de00 2026-03-09T17:21:34.223 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled mgr update... 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 >> v2:192.168.123.100:6800/3114914985 conn(0x7f56a403de00 msgr2=0x7f56a40402c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 --2- 192.168.123.102:0/2251561613 >> v2:192.168.123.100:6800/3114914985 conn(0x7f56a403de00 0x7f56a40402c0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f56b4006fd0 tx=0x7f56b4008040 comp rx=0 tx=0).stop 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 msgr2=0x7f56c4102e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4102e70 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f56b0009a00 tx=0x7f56b002fdf0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 shutdown_connections 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 --2- 192.168.123.102:0/2251561613 >> v2:192.168.123.100:6800/3114914985 conn(0x7f56a403de00 0x7f56a40402c0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f56c41087f0 0x7f56c4101a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f56c4105ec0 0x7f56c41014c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 
7f56ca948640 1 --2- 192.168.123.102:0/2251561613 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f56c4103590 0x7f56c4102e70 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 >> 192.168.123.102:0/2251561613 conn(0x7f56c40fd110 msgr2=0x7f56c4104190 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 shutdown_connections 2026-03-09T17:21:34.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:21:34.221+0000 7f56ca948640 1 -- 192.168.123.102:0/2251561613 wait complete. 2026-03-09T17:21:34.297 DEBUG:teuthology.orchestra.run.vm02:mgr.x> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.x.service 2026-03-09T17:21:34.297 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T17:21:34.298 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:21:34.298 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T17:21:34.300 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:21:34.300 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d? 2026-03-09T17:21:34.345 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda 2026-03-09T17:21:34.345 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb 2026-03-09T17:21:34.345 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc 2026-03-09T17:21:34.345 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd 2026-03-09T17:21:34.345 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde 2026-03-09T17:21:34.345 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T17:21:34.345 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T17:21:34.345 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 17:12:40.585752003 +0000 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 17:12:39.597752003 +0000 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 17:12:39.597752003 +0000 2026-03-09T17:21:34.389 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-09T17:21:34.389 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T17:21:34.436 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-09T17:21:34.436 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-09T17:21:34.437 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000195657 s, 2.6 MB/s 2026-03-09T17:21:34.437 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T17:21:34.482 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: cluster 2026-03-09T17:21:33.321782+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: cluster 2026-03-09T17:21:33.321782+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.222959+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.222959+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.223859+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.223859+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.224815+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.224815+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.225192+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.225192+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.228729+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.228729+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.229401+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow 
*"]}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.229401+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.231063+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.231063+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.232797+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.232797+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.233207+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 bash[23351]: audit 2026-03-09T17:21:34.233207+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.487 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 17:12:40.593752003 +0000 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 17:12:39.577752003 +0000 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 17:12:39.577752003 +0000 2026-03-09T17:21:34.525 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-09T17:21:34.525 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T17:21:34.573 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-09T17:21:34.573 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-09T17:21:34.573 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000125254 s, 4.1 MB/s 2026-03-09T17:21:34.574 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T17:21:34.618 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 17:12:40.585752003 +0000 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 17:12:39.581752003 +0000 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 17:12:39.581752003 +0000 2026-03-09T17:21:34.661 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-09T17:21:34.661 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: cluster 2026-03-09T17:21:33.321782+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: cluster 2026-03-09T17:21:33.321782+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.222959+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.222959+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.223859+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.223859+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.224815+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.224815+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.225192+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.225192+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.228729+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.228729+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.229401+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.229401+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.231063+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.231063+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.232797+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.232797+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.233207+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:34 vm00 bash[20770]: audit 2026-03-09T17:21:34.233207+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: cluster 2026-03-09T17:21:33.321782+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: cluster 2026-03-09T17:21:33.321782+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.222959+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.222959+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.223859+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.223859+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.224815+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.224815+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.225192+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.225192+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.228729+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.228729+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.229401+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.229401+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.231063+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.231063+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.232797+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.232797+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.233207+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.708 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:34 vm00 bash[28333]: audit 2026-03-09T17:21:34.233207+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:34.708 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-09T17:21:34.708 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-09T17:21:34.708 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000151935 s, 3.4 MB/s 2026-03-09T17:21:34.709 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T17:21:34.754 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 17:12:40.593752003 +0000 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 17:12:39.597752003 +0000 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 17:12:39.597752003 +0000 2026-03-09T17:21:34.797 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-09T17:21:34.797 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T17:21:34.844 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-09T17:21:34.844 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-09T17:21:34.844 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000141164 s, 3.6 MB/s 2026-03-09T17:21:34.845 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T17:21:34.890 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:21:34.890 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T17:21:34.894 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:21:34.894 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d? 
2026-03-09T17:21:34.938 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda 2026-03-09T17:21:34.938 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb 2026-03-09T17:21:34.938 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc 2026-03-09T17:21:34.938 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd 2026-03-09T17:21:34.938 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde 2026-03-09T17:21:34.938 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T17:21:34.938 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T17:21:34.938 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdb 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-09 17:13:05.759856358 +0000 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-09 17:13:04.647856358 +0000 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-09 17:13:04.647856358 +0000 2026-03-09T17:21:34.983 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-09T17:21:34.983 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T17:21:35.000 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:35.000 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:35.001 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:35.001 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:21:35.001 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:35.001 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:34 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:21:35.013 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-09T17:21:35.013 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-09T17:21:35.013 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000115576 s, 4.4 MB/s 2026-03-09T17:21:35.015 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T17:21:35.060 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdc 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-09 17:13:05.767856358 +0000 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-09 17:13:04.667856358 +0000 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-09 17:13:04.667856358 +0000 2026-03-09T17:21:35.106 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-09T17:21:35.106 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T17:21:35.166 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-09T17:21:35.166 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-09T17:21:35.166 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000330889 s, 1.5 MB/s 2026-03-09T17:21:35.168 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T17:21:35.233 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd 2026-03-09T17:21:35.278 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:35 vm02 systemd[1]: Started Ceph mgr.x for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 
2026-03-09T17:21:35.278 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:35 vm02 bash[24073]: debug 2026-03-09T17:21:35.237+0000 7f7ff9718140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdd 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-09 17:13:05.759856358 +0000 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-09 17:13:04.675856358 +0000 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-09 17:13:04.675856358 +0000 2026-03-09T17:21:35.296 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-09T17:21:35.296 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T17:21:35.350 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-09T17:21:35.350 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-09T17:21:35.350 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000167413 s, 3.1 MB/s 2026-03-09T17:21:35.351 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T17:21:35.395 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vde 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-09 17:13:05.767856358 +0000 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-09 17:13:04.667856358 +0000 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-09 17:13:04.667856358 +0000 2026-03-09T17:21:35.442 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-09T17:21:35.442 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T17:21:35.489 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-09T17:21:35.489 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-09T17:21:35.489 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000193231 s, 2.6 MB/s 2026-03-09T17:21:35.490 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T17:21:35.536 INFO:tasks.cephadm:Deploying osd.0 on vm00 with /dev/vde... 
2026-03-09T17:21:35.536 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vde 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:35 vm02 bash[24073]: debug 2026-03-09T17:21:35.273+0000 7f7ff9718140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:35 vm02 bash[24073]: debug 2026-03-09T17:21:35.405+0000 7f7ff9718140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:34.218359+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:34.218359+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: cephadm 2026-03-09T17:21:34.219059+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm02=x;count:2 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: cephadm 2026-03-09T17:21:34.219059+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm02=x;count:2 2026-03-09T17:21:35.634 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: cephadm 2026-03-09T17:21:34.233610+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm02 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: cephadm 2026-03-09T17:21:34.233610+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm02 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.045935+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.045935+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.052108+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.052108+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.055397+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.055397+0000 mon.a (mon.0) 272 : 
audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.058954+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.058954+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.069902+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:35 vm02 bash[23351]: audit 2026-03-09T17:21:35.069902+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:34.218359+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:34.218359+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: cephadm 2026-03-09T17:21:34.219059+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm02=x;count:2 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: cephadm 2026-03-09T17:21:34.219059+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm02=x;count:2 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: cephadm 2026-03-09T17:21:34.233610+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm02 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: cephadm 2026-03-09T17:21:34.233610+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm02 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.045935+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.045935+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.052108+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.052108+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.055397+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.055397+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.058954+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.058954+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.069902+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:35 vm00 bash[20770]: audit 2026-03-09T17:21:35.069902+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:34.218359+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:34.218359+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm02=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: cephadm 2026-03-09T17:21:34.219059+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm02=x;count:2 2026-03-09T17:21:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: cephadm 2026-03-09T17:21:34.219059+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm02=x;count:2 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: cephadm 2026-03-09T17:21:34.233610+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm02 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: cephadm 2026-03-09T17:21:34.233610+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm02 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.045935+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.045935+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.052108+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.052108+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.055397+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.055397+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.058954+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.058954+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.069902+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:35 vm00 bash[28333]: audit 2026-03-09T17:21:35.069902+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:36.134 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:35 vm02 bash[24073]: debug 2026-03-09T17:21:35.709+0000 7f7ff9718140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:36 vm02 bash[23351]: cluster 2026-03-09T17:21:35.321950+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:36 vm02 bash[23351]: cluster 2026-03-09T17:21:35.321950+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.177+0000 7f7ff9718140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.265+0000 7f7ff9718140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-09T17:21:36.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: from numpy import show_config as show_numpy_config 2026-03-09T17:21:36.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.397+0000 7f7ff9718140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:21:36.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:36 vm00 bash[20770]: cluster 2026-03-09T17:21:35.321950+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:36.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:36 vm00 bash[20770]: cluster 2026-03-09T17:21:35.321950+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:36.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:36 vm00 bash[28333]: cluster 2026-03-09T17:21:35.321950+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:36.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:36 vm00 bash[28333]: cluster 2026-03-09T17:21:35.321950+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:36.884 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.549+0000 7f7ff9718140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:21:36.884 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.589+0000 7f7ff9718140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:21:36.884 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.629+0000 7f7ff9718140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:21:36.884 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.669+0000 7f7ff9718140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:21:36.884 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:36 vm02 bash[24073]: debug 2026-03-09T17:21:36.721+0000 7f7ff9718140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:21:37.426 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.153+0000 7f7ff9718140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:21:37.426 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.189+0000 7f7ff9718140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:21:37.426 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.225+0000 7f7ff9718140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T17:21:37.426 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.381+0000 7f7ff9718140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:21:37.751 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.421+0000 7f7ff9718140 -1 mgr[py] Module crash has missing NOTIFY_TYPES 
member 2026-03-09T17:21:37.751 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.465+0000 7f7ff9718140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:21:37.751 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.585+0000 7f7ff9718140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:21:38.010 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.745+0000 7f7ff9718140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:21:38.010 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.925+0000 7f7ff9718140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:21:38.010 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:37 vm02 bash[24073]: debug 2026-03-09T17:21:37.961+0000 7f7ff9718140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:21:38.010 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:38 vm02 bash[24073]: debug 2026-03-09T17:21:38.005+0000 7f7ff9718140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:21:38.384 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:38 vm02 bash[24073]: debug 2026-03-09T17:21:38.157+0000 7f7ff9718140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:21:38.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:38 vm00 bash[20770]: cluster 2026-03-09T17:21:37.322134+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:38.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:38 vm00 bash[20770]: cluster 2026-03-09T17:21:37.322134+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:38.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:38 vm00 bash[28333]: cluster 2026-03-09T17:21:37.322134+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:38.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:38 vm00 bash[28333]: cluster 2026-03-09T17:21:37.322134+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:38.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:38 vm02 bash[23351]: cluster 2026-03-09T17:21:37.322134+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:38.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:38 vm02 bash[23351]: cluster 2026-03-09T17:21:37.322134+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:38.884 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:21:38 vm02 bash[24073]: debug 2026-03-09T17:21:38.397+0000 7f7ff9718140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.406833+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.406833+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: cluster 2026-03-09T17:21:38.406973+0000 mon.a (mon.0) 275 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: cluster 2026-03-09T17:21:38.406973+0000 mon.a (mon.0) 275 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.407760+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.407760+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.408600+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.408600+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.409245+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.409245+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.879745+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:39 vm00 bash[20770]: audit 2026-03-09T17:21:38.879745+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.406833+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.406833+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: cluster 2026-03-09T17:21:38.406973+0000 mon.a (mon.0) 275 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: cluster 2026-03-09T17:21:38.406973+0000 mon.a (mon.0) 275 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.407760+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:21:39.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.407760+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:21:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.408600+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:21:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.408600+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:21:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.409245+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:21:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.409245+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:21:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.879745+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:39 vm00 bash[28333]: audit 2026-03-09T17:21:38.879745+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.406833+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.406833+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: cluster 2026-03-09T17:21:38.406973+0000 mon.a (mon.0) 275 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: cluster 2026-03-09T17:21:38.406973+0000 mon.a (mon.0) 275 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.407760+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.407760+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.408600+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.408600+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.409245+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.409245+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 
192.168.123.102:0/2431890401' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:21:39.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.879745+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:39.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:39 vm02 bash[23351]: audit 2026-03-09T17:21:38.879745+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.150 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: cluster 2026-03-09T17:21:39.322290+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: cluster 2026-03-09T17:21:39.322290+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: cluster 2026-03-09T17:21:39.413748+0000 mon.a (mon.0) 277 : cluster [DBG] mgrmap e14: y(active, since 58s), standbys: x 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: cluster 2026-03-09T17:21:39.413748+0000 mon.a (mon.0) 277 : cluster [DBG] mgrmap e14: y(active, since 58s), standbys: x 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:39.413840+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:39.413840+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.010404+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.010404+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.014353+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.014353+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.015358+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.015358+0000 mon.a (mon.0) 281 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.015853+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.015853+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.019255+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.019255+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.029307+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.029307+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.029786+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.029786+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.030173+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:40 vm00 bash[20770]: audit 2026-03-09T17:21:40.030173+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: cluster 2026-03-09T17:21:39.322290+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: cluster 2026-03-09T17:21:39.322290+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: cluster 
2026-03-09T17:21:39.413748+0000 mon.a (mon.0) 277 : cluster [DBG] mgrmap e14: y(active, since 58s), standbys: x 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: cluster 2026-03-09T17:21:39.413748+0000 mon.a (mon.0) 277 : cluster [DBG] mgrmap e14: y(active, since 58s), standbys: x 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:39.413840+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:39.413840+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.010404+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.010404+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.014353+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.014353+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.015358+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.015358+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.015853+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.015853+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.019255+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.019255+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 
2026-03-09T17:21:40.029307+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.029307+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.029786+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.029786+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.030173+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:40 vm00 bash[28333]: audit 2026-03-09T17:21:40.030173+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: cluster 2026-03-09T17:21:39.322290+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:40.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: cluster 2026-03-09T17:21:39.322290+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:40.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: cluster 2026-03-09T17:21:39.413748+0000 mon.a (mon.0) 277 : cluster [DBG] mgrmap e14: y(active, since 58s), standbys: x 2026-03-09T17:21:40.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: cluster 2026-03-09T17:21:39.413748+0000 mon.a (mon.0) 277 : cluster [DBG] mgrmap e14: y(active, since 58s), standbys: x 2026-03-09T17:21:40.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:39.413840+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:21:40.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:39.413840+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.010404+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 
2026-03-09T17:21:40.010404+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.014353+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.014353+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.015358+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.015358+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.015853+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.015853+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.019255+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.019255+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.029307+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.029307+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.029786+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.029786+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 
2026-03-09T17:21:40.030173+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:40 vm02 bash[23351]: audit 2026-03-09T17:21:40.030173+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:40.979 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:21:40.997 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm00:/dev/vde 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: cephadm 2026-03-09T17:21:40.029162+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: cephadm 2026-03-09T17:21:40.029162+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: cephadm 2026-03-09T17:21:40.030587+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: cephadm 2026-03-09T17:21:40.030587+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.110170+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.110170+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.116421+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.116421+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.120539+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.120539+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.122375+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.122375+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.123398+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.123398+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.128545+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:41 vm00 bash[20770]: audit 2026-03-09T17:21:41.128545+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: cephadm 2026-03-09T17:21:40.029162+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: cephadm 2026-03-09T17:21:40.029162+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: cephadm 2026-03-09T17:21:40.030587+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: cephadm 2026-03-09T17:21:40.030587+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.110170+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.110170+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.116421+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.116421+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.120539+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.120539+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": 
"config dump", "format": "json"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.122375+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.122375+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.123398+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.123398+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.128545+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:41 vm00 bash[28333]: audit 2026-03-09T17:21:41.128545+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: cephadm 2026-03-09T17:21:40.029162+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T17:21:41.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: cephadm 2026-03-09T17:21:40.029162+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-09T17:21:41.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: cephadm 2026-03-09T17:21:40.030587+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T17:21:41.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: cephadm 2026-03-09T17:21:40.030587+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T17:21:41.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.110170+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.110170+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.116421+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.116421+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.120539+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.120539+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.122375+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.122375+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.123398+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.123398+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.128545+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:41 vm02 bash[23351]: audit 2026-03-09T17:21:41.128545+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:21:42.787 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:42 vm00 bash[28333]: cluster 2026-03-09T17:21:41.322465+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:42.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:42 vm00 bash[28333]: cluster 2026-03-09T17:21:41.322465+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:42.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:42 vm00 bash[20770]: cluster 2026-03-09T17:21:41.322465+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:42.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:42 vm00 bash[20770]: cluster 2026-03-09T17:21:41.322465+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:42.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:42 vm02 bash[23351]: cluster 2026-03-09T17:21:41.322465+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:42.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:42 vm02 bash[23351]: cluster 2026-03-09T17:21:41.322465+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:44 vm00 bash[20770]: cluster 2026-03-09T17:21:43.322697+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:44 vm00 bash[20770]: cluster 2026-03-09T17:21:43.322697+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:44.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:44 vm00 bash[28333]: cluster 2026-03-09T17:21:43.322697+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:44.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:44 vm00 bash[28333]: cluster 2026-03-09T17:21:43.322697+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:44.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:44 vm02 bash[23351]: cluster 2026-03-09T17:21:43.322697+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:44 vm02 bash[23351]: cluster 2026-03-09T17:21:43.322697+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:45.658 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- 192.168.123.100:0/2385053327 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe494102ae0 msgr2=0x7fe494102ee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- 192.168.123.100:0/2385053327 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe494102ae0 0x7fe494102ee0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fe488009a80 tx=0x7fe48802f2b0 comp rx=0 tx=0).stop 2026-03-09T17:21:45.814 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- 192.168.123.100:0/2385053327 shutdown_connections 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- 192.168.123.100:0/2385053327 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe4941046a0 0x7fe49410af30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- 192.168.123.100:0/2385053327 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494104160 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- 192.168.123.100:0/2385053327 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe494102ae0 0x7fe494102ee0 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- 192.168.123.100:0/2385053327 >> 192.168.123.100:0/2385053327 conn(0x7fe4940fe290 msgr2=0x7fe4941006b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- 192.168.123.100:0/2385053327 shutdown_connections 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- 192.168.123.100:0/2385053327 wait complete. 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 Processor -- start 2026-03-09T17:21:45.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- start start 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe494102ae0 0x7fe494111450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494111990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4941046a0 0x7fe494118a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe494076d40 con 0x7fe4941046a0 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fe494076bc0 con 0x7fe494102ae0 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4994d2640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fe494076ec0 con 0x7fe494103ce0 2026-03-09T17:21:45.815 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4937fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4941046a0 0x7fe494118a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe4927fc640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494111990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.811+0000 7fe492ffd640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe494102ae0 0x7fe494111450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe4927fc640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494111990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:32984/0 (socket says 192.168.123.100:32984) 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe4927fc640 1 -- 192.168.123.100:0/3291892883 learned_addr learned my addr 192.168.123.100:0/3291892883 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:21:45.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe492ffd640 1 -- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 msgr2=0x7fe494111990 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe492ffd640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494111990 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe492ffd640 1 -- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4941046a0 msgr2=0x7fe494118a10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe492ffd640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4941046a0 0x7fe494118a10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe492ffd640 1 -- 192.168.123.100:0/3291892883 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe49406c890 con 0x7fe494102ae0 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe4937fe640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4941046a0 0x7fe494118a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe4927fc640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494111990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe492ffd640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe494102ae0 0x7fe494111450 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fe4880099a0 tx=0x7fe488002990 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:45.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe488004440 con 0x7fe494102ae0 2026-03-09T17:21:45.817 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe49406cb20 con 0x7fe494102ae0 2026-03-09T17:21:45.817 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe49406cfa0 con 0x7fe494102ae0 2026-03-09T17:21:45.817 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe48803e070 con 0x7fe494102ae0 2026-03-09T17:21:45.817 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe488041870 con 0x7fe494102ae0 2026-03-09T17:21:45.818 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 14) ==== 99944+0+0 (secure 0 0 0) 0x7fe488041a10 con 0x7fe494102ae0 2026-03-09T17:21:45.819 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.815+0000 7fe473fff640 1 --2- 192.168.123.100:0/3291892883 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe45c077540 0x7fe45c079a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:21:45.819 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.819+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7fe4880bcf00 con 0x7fe494102ae0 2026-03-09T17:21:45.819 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.819+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe460005180 con 0x7fe494102ae0 2026-03-09T17:21:45.819 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.819+0000 7fe4927fc640 1 --2- 192.168.123.100:0/3291892883 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe45c077540 0x7fe45c079a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:21:45.820 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.819+0000 7fe4927fc640 1 --2- 192.168.123.100:0/3291892883 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe45c077540 0x7fe45c079a00 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fe494112970 tx=0x7fe47c00a480 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:21:45.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.823+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe488087510 con 0x7fe494102ae0 2026-03-09T17:21:45.921 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:21:45.919+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7fe460002bf0 con 0x7fe45c077540 2026-03-09T17:21:46.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: cluster 2026-03-09T17:21:45.322939+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: cluster 2026-03-09T17:21:45.322939+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: audit 2026-03-09T17:21:45.923413+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: audit 2026-03-09T17:21:45.923413+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: audit 2026-03-09T17:21:45.924723+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: audit 2026-03-09T17:21:45.924723+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: audit 2026-03-09T17:21:45.925163+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:46 vm00 bash[20770]: audit 2026-03-09T17:21:45.925163+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: cluster 2026-03-09T17:21:45.322939+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:46.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: cluster 2026-03-09T17:21:45.322939+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: audit 2026-03-09T17:21:45.923413+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: audit 2026-03-09T17:21:45.923413+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: audit 2026-03-09T17:21:45.924723+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: audit 2026-03-09T17:21:45.924723+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: audit 2026-03-09T17:21:45.925163+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:46 vm00 bash[28333]: audit 2026-03-09T17:21:45.925163+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:46.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: cluster 2026-03-09T17:21:45.322939+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: cluster 2026-03-09T17:21:45.322939+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: audit 2026-03-09T17:21:45.923413+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: audit 2026-03-09T17:21:45.923413+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: audit 2026-03-09T17:21:45.924723+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: audit 2026-03-09T17:21:45.924723+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: audit 2026-03-09T17:21:45.925163+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:46 vm02 bash[23351]: audit 2026-03-09T17:21:45.925163+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:21:47.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:47 vm00 bash[28333]: audit 2026-03-09T17:21:45.921877+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:47.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:47 vm00 bash[28333]: audit 2026-03-09T17:21:45.921877+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:47.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:47 vm00 bash[20770]: audit 2026-03-09T17:21:45.921877+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:47.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:47 vm00 bash[20770]: audit 2026-03-09T17:21:45.921877+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:47.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:47 vm02 bash[23351]: audit 2026-03-09T17:21:45.921877+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:47.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:47 vm02 bash[23351]: audit 2026-03-09T17:21:45.921877+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:21:48.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:48 vm00 bash[28333]: cluster 2026-03-09T17:21:47.323113+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:48.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:48 vm00 bash[28333]: cluster 2026-03-09T17:21:47.323113+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:48.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:48 vm00 bash[20770]: cluster 2026-03-09T17:21:47.323113+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:48.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:48 vm00 bash[20770]: cluster 2026-03-09T17:21:47.323113+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:48.884 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:48 vm02 bash[23351]: cluster 2026-03-09T17:21:47.323113+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:48.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:48 vm02 bash[23351]: cluster 2026-03-09T17:21:47.323113+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:50.753 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:50 vm00 bash[28333]: cluster 2026-03-09T17:21:49.323324+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:50.753 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:50 vm00 bash[28333]: cluster 2026-03-09T17:21:49.323324+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:50.753 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:50 vm00 bash[20770]: cluster 2026-03-09T17:21:49.323324+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:50.753 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:50 vm00 bash[20770]: cluster 2026-03-09T17:21:49.323324+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:50.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:50 vm02 bash[23351]: cluster 2026-03-09T17:21:49.323324+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:50.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:50 vm02 bash[23351]: cluster 2026-03-09T17:21:49.323324+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.274691+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/3708407530' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.274691+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/3708407530' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.275074+0000 mon.a (mon.0) 296 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.275074+0000 mon.a (mon.0) 296 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.277999+0000 mon.a (mon.0) 297 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]': finished 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.277999+0000 mon.a (mon.0) 297 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]': finished 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: cluster 2026-03-09T17:21:51.281008+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: cluster 2026-03-09T17:21:51.281008+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.282354+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:21:51.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:51 vm00 bash[28333]: audit 2026-03-09T17:21:51.282354+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.274691+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/3708407530' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.274691+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/3708407530' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.275074+0000 mon.a (mon.0) 296 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.275074+0000 mon.a (mon.0) 296 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.277999+0000 mon.a (mon.0) 297 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]': finished 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.277999+0000 mon.a (mon.0) 297 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]': finished 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: cluster 2026-03-09T17:21:51.281008+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: cluster 2026-03-09T17:21:51.281008+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.282354+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:21:51.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:51 vm00 bash[20770]: audit 2026-03-09T17:21:51.282354+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.274691+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/3708407530' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.274691+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/3708407530' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.275074+0000 mon.a (mon.0) 296 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.275074+0000 mon.a (mon.0) 296 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]: dispatch 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.277999+0000 mon.a (mon.0) 297 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]': finished 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.277999+0000 mon.a (mon.0) 297 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "568cb8ad-2652-448a-8223-a18b7a893c0f"}]': finished 2026-03-09T17:21:51.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: cluster 2026-03-09T17:21:51.281008+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T17:21:51.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: cluster 2026-03-09T17:21:51.281008+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T17:21:51.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.282354+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:21:51.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:51 vm02 bash[23351]: audit 2026-03-09T17:21:51.282354+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:52 vm00 bash[28333]: cluster 2026-03-09T17:21:51.323550+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:52 vm00 bash[28333]: cluster 2026-03-09T17:21:51.323550+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:52 vm00 bash[28333]: audit 2026-03-09T17:21:51.871181+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/1888945662' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:52 vm00 bash[28333]: audit 2026-03-09T17:21:51.871181+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/1888945662' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:52 vm00 bash[20770]: cluster 2026-03-09T17:21:51.323550+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:52 vm00 bash[20770]: cluster 2026-03-09T17:21:51.323550+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:52 vm00 bash[20770]: audit 2026-03-09T17:21:51.871181+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/1888945662' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:21:52.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:52 vm00 bash[20770]: audit 2026-03-09T17:21:51.871181+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 
192.168.123.100:0/1888945662' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:21:52.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:52 vm02 bash[23351]: cluster 2026-03-09T17:21:51.323550+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:52.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:52 vm02 bash[23351]: cluster 2026-03-09T17:21:51.323550+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:52.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:52 vm02 bash[23351]: audit 2026-03-09T17:21:51.871181+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/1888945662' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:21:52.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:52 vm02 bash[23351]: audit 2026-03-09T17:21:51.871181+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/1888945662' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:21:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:54 vm00 bash[28333]: cluster 2026-03-09T17:21:53.323742+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:54 vm00 bash[28333]: cluster 2026-03-09T17:21:53.323742+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:54 vm00 bash[20770]: cluster 2026-03-09T17:21:53.323742+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:54 vm00 bash[20770]: cluster 2026-03-09T17:21:53.323742+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:54.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:54 vm02 bash[23351]: cluster 2026-03-09T17:21:53.323742+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:54.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:54 vm02 bash[23351]: cluster 2026-03-09T17:21:53.323742+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:56.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:56 vm00 bash[28333]: cluster 2026-03-09T17:21:55.323933+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:56.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:56 vm00 bash[28333]: cluster 2026-03-09T17:21:55.323933+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:56.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:56 vm00 bash[20770]: cluster 2026-03-09T17:21:55.323933+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:56.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:56 vm00 bash[20770]: cluster 2026-03-09T17:21:55.323933+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:56.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:56 vm02 bash[23351]: cluster 
2026-03-09T17:21:55.323933+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:56.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:56 vm02 bash[23351]: cluster 2026-03-09T17:21:55.323933+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:58.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:58 vm00 bash[28333]: cluster 2026-03-09T17:21:57.324100+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:58.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:21:58 vm00 bash[28333]: cluster 2026-03-09T17:21:57.324100+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:58.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:58 vm00 bash[20770]: cluster 2026-03-09T17:21:57.324100+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:58.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:21:58 vm00 bash[20770]: cluster 2026-03-09T17:21:57.324100+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:58.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:58 vm02 bash[23351]: cluster 2026-03-09T17:21:57.324100+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:21:58.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:21:58 vm02 bash[23351]: cluster 2026-03-09T17:21:57.324100+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:00.645 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 bash[28333]: cluster 2026-03-09T17:21:59.324281+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:00.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 bash[28333]: cluster 2026-03-09T17:21:59.324281+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:00.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 bash[28333]: audit 2026-03-09T17:22:00.091962+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T17:22:00.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 bash[28333]: audit 2026-03-09T17:22:00.091962+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T17:22:00.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 bash[28333]: audit 2026-03-09T17:22:00.092571+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:00.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 bash[28333]: audit 2026-03-09T17:22:00.092571+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:00.647 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 bash[20770]: cluster 2026-03-09T17:21:59.324281+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 
B avail 2026-03-09T17:22:00.647 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 bash[20770]: cluster 2026-03-09T17:21:59.324281+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:00.647 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 bash[20770]: audit 2026-03-09T17:22:00.091962+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T17:22:00.647 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 bash[20770]: audit 2026-03-09T17:22:00.091962+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T17:22:00.647 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 bash[20770]: audit 2026-03-09T17:22:00.092571+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:00.647 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 bash[20770]: audit 2026-03-09T17:22:00.092571+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:00.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:00 vm02 bash[23351]: cluster 2026-03-09T17:21:59.324281+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:00.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:00 vm02 bash[23351]: cluster 2026-03-09T17:21:59.324281+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:00.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:00 vm02 bash[23351]: audit 2026-03-09T17:22:00.091962+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T17:22:00.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:00 vm02 bash[23351]: audit 2026-03-09T17:22:00.091962+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T17:22:00.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:00 vm02 bash[23351]: audit 2026-03-09T17:22:00.092571+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:00 vm02 bash[23351]: audit 2026-03-09T17:22:00.092571+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:00.900 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:00 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:22:00.900 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:00 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:00.900 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:22:00 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:01.203 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:22:01 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:01.203 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:01.203 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:22:01.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: cephadm 2026-03-09T17:22:00.093025+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: cephadm 2026-03-09T17:22:00.093025+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: audit 2026-03-09T17:22:01.097902+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: audit 2026-03-09T17:22:01.097902+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: audit 2026-03-09T17:22:01.102612+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: audit 2026-03-09T17:22:01.102612+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: audit 2026-03-09T17:22:01.107588+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:01.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:01 vm02 bash[23351]: audit 2026-03-09T17:22:01.107588+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: cephadm 2026-03-09T17:22:00.093025+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: cephadm 2026-03-09T17:22:00.093025+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: audit 2026-03-09T17:22:01.097902+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: audit 2026-03-09T17:22:01.097902+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: audit 2026-03-09T17:22:01.102612+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: audit 2026-03-09T17:22:01.102612+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: audit 2026-03-09T17:22:01.107588+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
2026-03-09T17:22:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:01 vm00 bash[28333]: audit 2026-03-09T17:22:01.107588+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: cephadm 2026-03-09T17:22:00.093025+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: cephadm 2026-03-09T17:22:00.093025+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: audit 2026-03-09T17:22:01.097902+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: audit 2026-03-09T17:22:01.097902+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: audit 2026-03-09T17:22:01.102612+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: audit 2026-03-09T17:22:01.102612+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: audit 2026-03-09T17:22:01.107588+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:01 vm00 bash[20770]: audit 2026-03-09T17:22:01.107588+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:02.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:02 vm00 bash[20770]: cluster 2026-03-09T17:22:01.324444+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:02.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:02 vm00 bash[20770]: cluster 2026-03-09T17:22:01.324444+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:02.826 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:02 vm00 bash[28333]: cluster 2026-03-09T17:22:01.324444+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:02.826 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:02 vm00 bash[28333]: cluster 2026-03-09T17:22:01.324444+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:02.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:02 vm02 bash[23351]: cluster 2026-03-09T17:22:01.324444+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:02 vm02 bash[23351]: cluster 2026-03-09T17:22:01.324444+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:04.884 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:04 vm02 bash[23351]: cluster 2026-03-09T17:22:03.324612+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:04.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:04 vm02 bash[23351]: cluster 2026-03-09T17:22:03.324612+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:05.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:04 vm00 bash[20770]: cluster 2026-03-09T17:22:03.324612+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:05.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:04 vm00 bash[20770]: cluster 2026-03-09T17:22:03.324612+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:04 vm00 bash[28333]: cluster 2026-03-09T17:22:03.324612+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:04 vm00 bash[28333]: cluster 2026-03-09T17:22:03.324612+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:05.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:05 vm02 bash[23351]: audit 2026-03-09T17:22:04.721023+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T17:22:05.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:05 vm02 bash[23351]: audit 2026-03-09T17:22:04.721023+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T17:22:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:05 vm00 bash[20770]: audit 2026-03-09T17:22:04.721023+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T17:22:06.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:05 vm00 bash[20770]: audit 2026-03-09T17:22:04.721023+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T17:22:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:05 vm00 bash[28333]: audit 2026-03-09T17:22:04.721023+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T17:22:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:05 vm00 bash[28333]: audit 2026-03-09T17:22:04.721023+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T17:22:06.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: cluster 2026-03-09T17:22:05.324780+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:06.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 
09 17:22:06 vm02 bash[23351]: cluster 2026-03-09T17:22:05.324780+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:06.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: audit 2026-03-09T17:22:05.568135+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T17:22:06.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: audit 2026-03-09T17:22:05.568135+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T17:22:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: cluster 2026-03-09T17:22:05.571517+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T17:22:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: cluster 2026-03-09T17:22:05.571517+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T17:22:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: audit 2026-03-09T17:22:05.571783+0000 mon.a (mon.0) 308 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: audit 2026-03-09T17:22:05.571783+0000 mon.a (mon.0) 308 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: audit 2026-03-09T17:22:05.571889+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:06 vm02 bash[23351]: audit 2026-03-09T17:22:05.571889+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: cluster 2026-03-09T17:22:05.324780+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: cluster 2026-03-09T17:22:05.324780+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: audit 2026-03-09T17:22:05.568135+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: audit 2026-03-09T17:22:05.568135+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': 
finished 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: cluster 2026-03-09T17:22:05.571517+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: cluster 2026-03-09T17:22:05.571517+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: audit 2026-03-09T17:22:05.571783+0000 mon.a (mon.0) 308 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: audit 2026-03-09T17:22:05.571783+0000 mon.a (mon.0) 308 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: audit 2026-03-09T17:22:05.571889+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:06 vm00 bash[28333]: audit 2026-03-09T17:22:05.571889+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: cluster 2026-03-09T17:22:05.324780+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: cluster 2026-03-09T17:22:05.324780+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: audit 2026-03-09T17:22:05.568135+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T17:22:07.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: audit 2026-03-09T17:22:05.568135+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T17:22:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: cluster 2026-03-09T17:22:05.571517+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T17:22:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: cluster 2026-03-09T17:22:05.571517+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T17:22:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: audit 2026-03-09T17:22:05.571783+0000 mon.a (mon.0) 308 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:07.038 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: audit 2026-03-09T17:22:05.571783+0000 mon.a (mon.0) 308 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: audit 2026-03-09T17:22:05.571889+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:06 vm00 bash[20770]: audit 2026-03-09T17:22:05.571889+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:06.570741+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:07.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:06.570741+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:07.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: cluster 2026-03-09T17:22:06.573476+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: cluster 2026-03-09T17:22:06.573476+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:06.574367+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:06.574367+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:06.584439+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:06.584439+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:07.285433+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:07.285433+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:07.292826+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:07 vm02 bash[23351]: audit 2026-03-09T17:22:07.292826+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:06.570741+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:06.570741+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: cluster 2026-03-09T17:22:06.573476+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: cluster 2026-03-09T17:22:06.573476+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:06.574367+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:06.574367+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:06.584439+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:06.584439+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:07.285433+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:07.285433+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:07.292826+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:07 vm00 bash[28333]: audit 2026-03-09T17:22:07.292826+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:06.570741+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:06.570741+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1564530650' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: cluster 2026-03-09T17:22:06.573476+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: cluster 2026-03-09T17:22:06.573476+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:06.574367+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:06.574367+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:06.584439+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:06.584439+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:07.285433+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:07.285433+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:07.292826+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:07 vm00 bash[20770]: audit 2026-03-09T17:22:07.292826+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.366 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 0 on host 'vm00' 2026-03-09T17:22:08.366 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.363+0000 7fe473fff640 1 -- 192.168.123.100:0/3291892883 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7fe460002bf0 con 0x7fe45c077540 
2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.363+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe45c077540 msgr2=0x7fe45c079a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.363+0000 7fe4994d2640 1 --2- 192.168.123.100:0/3291892883 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe45c077540 0x7fe45c079a00 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fe494112970 tx=0x7fe47c00a480 comp rx=0 tx=0).stop 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.363+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe494102ae0 msgr2=0x7fe494111450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.363+0000 7fe4994d2640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe494102ae0 0x7fe494111450 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fe4880099a0 tx=0x7fe488002990 comp rx=0 tx=0).stop 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 shutdown_connections 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 --2- 192.168.123.100:0/3291892883 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe45c077540 0x7fe45c079a00 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe4941046a0 0x7fe494118a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe494103ce0 0x7fe494111990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 --2- 192.168.123.100:0/3291892883 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe494102ae0 0x7fe494111450 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 >> 192.168.123.100:0/3291892883 conn(0x7fe4940fe290 msgr2=0x7fe4940ffd20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 shutdown_connections 2026-03-09T17:22:08.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:08.367+0000 7fe4994d2640 1 -- 192.168.123.100:0/3291892883 wait complete. 2026-03-09T17:22:08.451 DEBUG:teuthology.orchestra.run.vm00:osd.0> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.0.service 2026-03-09T17:22:08.452 INFO:tasks.cephadm:Deploying osd.1 on vm00 with /dev/vdd... 
2026-03-09T17:22:08.452 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdd 2026-03-09T17:22:08.609 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:05.721048+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:05.721048+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:05.721106+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:05.721106+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:07.324988+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:07.324988+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.586265+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.586265+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:07.597207+0000 mon.a (mon.0) 317 : cluster [INF] osd.0 v2:192.168.123.100:6801/1564530650 boot 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:07.597207+0000 mon.a (mon.0) 317 : cluster [INF] osd.0 v2:192.168.123.100:6801/1564530650 boot 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:07.597259+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: cluster 2026-03-09T17:22:07.597259+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.601460+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.601460+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.610 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.684655+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.684655+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.685172+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.685172+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.689348+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:07.689348+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:08.350862+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:08.350862+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:08.356368+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:08.356368+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:08.361877+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.610 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:08 vm00 bash[20770]: audit 2026-03-09T17:22:08.361877+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:05.721048+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:05.721048+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 
2026-03-09T17:22:05.721106+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:05.721106+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:07.324988+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:07.324988+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.586265+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.586265+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:07.597207+0000 mon.a (mon.0) 317 : cluster [INF] osd.0 v2:192.168.123.100:6801/1564530650 boot 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:07.597207+0000 mon.a (mon.0) 317 : cluster [INF] osd.0 v2:192.168.123.100:6801/1564530650 boot 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:07.597259+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: cluster 2026-03-09T17:22:07.597259+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.601460+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.601460+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.684655+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.684655+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.685172+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.685172+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.689348+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:07.689348+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:08.350862+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:08.350862+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:08.356368+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:08.356368+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:08.361877+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:08 vm00 bash[28333]: audit 2026-03-09T17:22:08.361877+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:05.721048+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:05.721048+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:05.721106+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:05.721106+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:07.324988+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:07.324988+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 
2026-03-09T17:22:07.586265+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.586265+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:07.597207+0000 mon.a (mon.0) 317 : cluster [INF] osd.0 v2:192.168.123.100:6801/1564530650 boot 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:07.597207+0000 mon.a (mon.0) 317 : cluster [INF] osd.0 v2:192.168.123.100:6801/1564530650 boot 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:07.597259+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: cluster 2026-03-09T17:22:07.597259+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.601460+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.601460+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.684655+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.684655+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.685172+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.685172+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.689348+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:07.689348+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 
2026-03-09T17:22:08.350862+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:08.350862+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:08.356368+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:08.356368+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:08.361877+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:08 vm02 bash[23351]: audit 2026-03-09T17:22:08.361877+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:10.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:09 vm00 bash[20770]: cluster 2026-03-09T17:22:08.692686+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T17:22:10.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:09 vm00 bash[20770]: cluster 2026-03-09T17:22:08.692686+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T17:22:10.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:09 vm00 bash[28333]: cluster 2026-03-09T17:22:08.692686+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T17:22:10.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:09 vm00 bash[28333]: cluster 2026-03-09T17:22:08.692686+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T17:22:10.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:09 vm02 bash[23351]: cluster 2026-03-09T17:22:08.692686+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T17:22:10.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:09 vm02 bash[23351]: cluster 2026-03-09T17:22:08.692686+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T17:22:11.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:10 vm00 bash[20770]: cluster 2026-03-09T17:22:09.325309+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:11.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:10 vm00 bash[20770]: cluster 2026-03-09T17:22:09.325309+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:10 vm00 bash[28333]: cluster 2026-03-09T17:22:09.325309+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:10 vm00 bash[28333]: cluster 2026-03-09T17:22:09.325309+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-09T17:22:11.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:10 vm02 bash[23351]: cluster 2026-03-09T17:22:09.325309+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:11.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:10 vm02 bash[23351]: cluster 2026-03-09T17:22:09.325309+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:12 vm00 bash[20770]: cluster 2026-03-09T17:22:11.325491+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:12 vm00 bash[20770]: cluster 2026-03-09T17:22:11.325491+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:12 vm00 bash[28333]: cluster 2026-03-09T17:22:11.325491+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:12 vm00 bash[28333]: cluster 2026-03-09T17:22:11.325491+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:12 vm02 bash[23351]: cluster 2026-03-09T17:22:11.325491+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:12 vm02 bash[23351]: cluster 2026-03-09T17:22:11.325491+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:13.136 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:22:13.992 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:22:14.010 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm00:/dev/vdd 2026-03-09T17:22:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:14 vm00 bash[28333]: cluster 2026-03-09T17:22:13.325692+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:14 vm00 bash[28333]: cluster 2026-03-09T17:22:13.325692+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:15.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:14 vm00 bash[20770]: cluster 2026-03-09T17:22:13.325692+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:15.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:14 vm00 bash[20770]: cluster 2026-03-09T17:22:13.325692+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:15.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:14 vm02 bash[23351]: cluster 2026-03-09T17:22:13.325692+0000 mgr.y 
(mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:15.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:14 vm02 bash[23351]: cluster 2026-03-09T17:22:13.325692+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: cephadm 2026-03-09T17:22:14.737473+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: cephadm 2026-03-09T17:22:14.737473+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.742434+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.742434+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.746196+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.746196+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.746846+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.746846+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.747926+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.747926+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.748265+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.748265+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: 
audit 2026-03-09T17:22:14.752628+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:15 vm00 bash[28333]: audit 2026-03-09T17:22:14.752628+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: cephadm 2026-03-09T17:22:14.737473+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: cephadm 2026-03-09T17:22:14.737473+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.742434+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.742434+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.746196+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.746196+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.746846+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.746846+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.747926+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.747926+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.748265+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.748265+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.752628+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:15 vm00 bash[20770]: audit 2026-03-09T17:22:14.752628+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: cephadm 2026-03-09T17:22:14.737473+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:16.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: cephadm 2026-03-09T17:22:14.737473+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:16.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.742434+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.742434+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.746196+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.746196+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.746846+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.746846+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.747926+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.747926+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.748265+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.748265+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:16.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.752628+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:15 vm02 bash[23351]: audit 2026-03-09T17:22:14.752628+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:16 vm00 bash[28333]: cluster 2026-03-09T17:22:15.325937+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:16 vm00 bash[28333]: cluster 2026-03-09T17:22:15.325937+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:17.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:16 vm00 bash[20770]: cluster 2026-03-09T17:22:15.325937+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:17.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:16 vm00 bash[20770]: cluster 2026-03-09T17:22:15.325937+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:17.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:16 vm02 bash[23351]: cluster 2026-03-09T17:22:15.325937+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:17.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:16 vm02 bash[23351]: cluster 2026-03-09T17:22:15.325937+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:18.657 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- 192.168.123.100:0/2361517257 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 msgr2=0x7f3bdc102850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 --2- 192.168.123.100:0/2361517257 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc102850 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f3bd000ade0 tx=0x7f3bd0030860 comp rx=0 tx=0).stop 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- 192.168.123.100:0/2361517257 shutdown_connections 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 --2- 192.168.123.100:0/2361517257 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 0x7f3bdc1094c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 --2- 192.168.123.100:0/2361517257 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc102850 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 
1 --2- 192.168.123.100:0/2361517257 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc101e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- 192.168.123.100:0/2361517257 >> 192.168.123.100:0/2361517257 conn(0x7f3bdc0fc420 msgr2=0x7f3bdc0fe860 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- 192.168.123.100:0/2361517257 shutdown_connections 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- 192.168.123.100:0/2361517257 wait complete. 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 Processor -- start 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- start start 2026-03-09T17:22:18.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc196f90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc196f90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc196f90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:42140/0 (socket says 192.168.123.100:42140) 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc1974d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 0x7f3bdc193a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3bdc10ba10 con 0x7f3bdc069a50 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f3bdc10b890 con 0x7f3bdc1070d0 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3bdc10bb90 con 0x7f3bdc1023d0 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 -- 192.168.123.100:0/4115545497 learned_addr learned my addr 192.168.123.100:0/4115545497 
(peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 -- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 msgr2=0x7f3bdc1974d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be2b9d640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 0x7f3bdc193a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be1b9b640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc1974d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:18.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc1974d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 -- 192.168.123.100:0/4115545497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 msgr2=0x7f3bdc193a90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 0x7f3bdc193a90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 -- 192.168.123.100:0/4115545497 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3bdc194350 con 0x7f3bdc069a50 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be239c640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc196f90 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f3bcc002a80 tx=0x7f3bcc002f50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be1b9b640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc1974d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3bcc00ecc0 con 0x7f3bdc069a50 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be4627640 1 -- 192.168.123.100:0/4115545497 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3bdc1a8ba0 con 0x7f3bdc069a50 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3be2b9d640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 0x7f3bdc193a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.819+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f3bcc00ee60 con 0x7f3bdc069a50 2026-03-09T17:22:18.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.823+0000 7f3be4627640 1 -- 192.168.123.100:0/4115545497 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3bdc1a8f40 con 0x7f3bdc069a50 2026-03-09T17:22:18.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.823+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3ba0005180 con 0x7f3bdc069a50 2026-03-09T17:22:18.828 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3bcc018810 con 0x7f3bdc069a50 2026-03-09T17:22:18.828 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 99944+0+0 (secure 0 0 0) 0x7f3bcc0189b0 con 0x7f3bdc069a50 2026-03-09T17:22:18.828 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3bcb7fe640 1 --2- 192.168.123.100:0/4115545497 >> v2:192.168.123.100:6800/3114914985 conn(0x7f3ba8077640 0x7f3ba8079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:18.828 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(9..9 src has 1..9) ==== 1531+0+0 (secure 0 0 0) 0x7f3bcc09a0e0 con 0x7f3bdc069a50 2026-03-09T17:22:18.828 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3bcc014030 con 0x7f3bdc069a50 2026-03-09T17:22:18.828 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3be1b9b640 1 --2- 192.168.123.100:0/4115545497 >> v2:192.168.123.100:6800/3114914985 conn(0x7f3ba8077640 0x7f3ba8079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:18.828 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.827+0000 7f3be1b9b640 1 --2- 192.168.123.100:0/4115545497 >> v2:192.168.123.100:6800/3114914985 conn(0x7f3ba8077640 0x7f3ba8079b00 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f3bdc194f10 tx=0x7f3bd00023d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:22:18.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:18.923+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f3ba0002bf0 con 0x7f3ba8077640 2026-03-09T17:22:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:18 vm00 bash[28333]: cluster 2026-03-09T17:22:17.326129+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:18 vm00 bash[28333]: cluster 2026-03-09T17:22:17.326129+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:19.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:18 vm00 bash[20770]: cluster 2026-03-09T17:22:17.326129+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:19.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:18 vm00 bash[20770]: cluster 2026-03-09T17:22:17.326129+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:19.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:18 vm02 bash[23351]: cluster 2026-03-09T17:22:17.326129+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:18 vm02 bash[23351]: cluster 2026-03-09T17:22:17.326129+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:19 vm00 bash[28333]: audit 2026-03-09T17:22:18.928275+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:19 vm00 bash[28333]: audit 2026-03-09T17:22:18.928275+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:19 vm00 bash[28333]: audit 2026-03-09T17:22:18.930206+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:19 vm00 bash[28333]: audit 2026-03-09T17:22:18.930206+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:19 vm00 bash[28333]: audit 2026-03-09T17:22:18.930652+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:19 vm00 bash[28333]: audit 2026-03-09T17:22:18.930652+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:19 vm00 bash[20770]: audit 2026-03-09T17:22:18.928275+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:19 vm00 bash[20770]: audit 2026-03-09T17:22:18.928275+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:19 vm00 bash[20770]: audit 2026-03-09T17:22:18.930206+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:19 vm00 bash[20770]: audit 2026-03-09T17:22:18.930206+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:19 vm00 bash[20770]: audit 2026-03-09T17:22:18.930652+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:19 vm00 bash[20770]: audit 2026-03-09T17:22:18.930652+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:20.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:19 vm02 bash[23351]: audit 2026-03-09T17:22:18.928275+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:19 vm02 bash[23351]: audit 2026-03-09T17:22:18.928275+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:19 vm02 bash[23351]: audit 2026-03-09T17:22:18.930206+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:19 vm02 bash[23351]: audit 2026-03-09T17:22:18.930206+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:19 vm02 bash[23351]: audit 2026-03-09T17:22:18.930652+0000 mon.a (mon.0) 335 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:19 vm02 bash[23351]: audit 2026-03-09T17:22:18.930652+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:20 vm00 bash[28333]: audit 2026-03-09T17:22:18.926594+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:20 vm00 bash[28333]: audit 2026-03-09T17:22:18.926594+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:20 vm00 bash[28333]: cluster 2026-03-09T17:22:19.326441+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:20 vm00 bash[28333]: cluster 2026-03-09T17:22:19.326441+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:20 vm00 bash[20770]: audit 2026-03-09T17:22:18.926594+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:20 vm00 bash[20770]: audit 2026-03-09T17:22:18.926594+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:20 vm00 bash[20770]: cluster 2026-03-09T17:22:19.326441+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:20 vm00 bash[20770]: cluster 2026-03-09T17:22:19.326441+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:20 vm02 bash[23351]: audit 2026-03-09T17:22:18.926594+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:20 vm02 bash[23351]: audit 2026-03-09T17:22:18.926594+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:20 vm02 bash[23351]: cluster 2026-03-09T17:22:19.326441+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:21.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:20 vm02 bash[23351]: cluster 2026-03-09T17:22:19.326441+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:23.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:22 vm02 bash[23351]: cluster 2026-03-09T17:22:21.326706+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:23.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:22 vm02 bash[23351]: cluster 2026-03-09T17:22:21.326706+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:22 vm00 bash[28333]: cluster 2026-03-09T17:22:21.326706+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:22 vm00 bash[28333]: cluster 2026-03-09T17:22:21.326706+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:23.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:22 vm00 bash[20770]: cluster 2026-03-09T17:22:21.326706+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:23.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:22 vm00 bash[20770]: cluster 2026-03-09T17:22:21.326706+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: cluster 2026-03-09T17:22:23.326952+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: cluster 2026-03-09T17:22:23.326952+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: audit 2026-03-09T17:22:24.301455+0000 mon.a (mon.0) 336 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: audit 2026-03-09T17:22:24.301455+0000 mon.a (mon.0) 336 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: audit 2026-03-09T17:22:24.303956+0000 mon.a (mon.0) 337 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]': finished 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: audit 2026-03-09T17:22:24.303956+0000 mon.a (mon.0) 337 : audit [INF] from='client.? 
192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]': finished 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: cluster 2026-03-09T17:22:24.307183+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: cluster 2026-03-09T17:22:24.307183+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: audit 2026-03-09T17:22:24.307490+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:24 vm00 bash[28333]: audit 2026-03-09T17:22:24.307490+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: cluster 2026-03-09T17:22:23.326952+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: cluster 2026-03-09T17:22:23.326952+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: audit 2026-03-09T17:22:24.301455+0000 mon.a (mon.0) 336 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: audit 2026-03-09T17:22:24.301455+0000 mon.a (mon.0) 336 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: audit 2026-03-09T17:22:24.303956+0000 mon.a (mon.0) 337 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]': finished 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: audit 2026-03-09T17:22:24.303956+0000 mon.a (mon.0) 337 : audit [INF] from='client.? 
192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]': finished 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: cluster 2026-03-09T17:22:24.307183+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: cluster 2026-03-09T17:22:24.307183+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: audit 2026-03-09T17:22:24.307490+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:24 vm00 bash[20770]: audit 2026-03-09T17:22:24.307490+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: cluster 2026-03-09T17:22:23.326952+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: cluster 2026-03-09T17:22:23.326952+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: audit 2026-03-09T17:22:24.301455+0000 mon.a (mon.0) 336 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]: dispatch 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: audit 2026-03-09T17:22:24.301455+0000 mon.a (mon.0) 336 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]: dispatch 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: audit 2026-03-09T17:22:24.303956+0000 mon.a (mon.0) 337 : audit [INF] from='client.? 192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]': finished 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: audit 2026-03-09T17:22:24.303956+0000 mon.a (mon.0) 337 : audit [INF] from='client.? 
192.168.123.100:0/3625445347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9e37873-3fd7-4a71-be36-c91d099132ac"}]': finished 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: cluster 2026-03-09T17:22:24.307183+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: cluster 2026-03-09T17:22:24.307183+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: audit 2026-03-09T17:22:24.307490+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:25.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:24 vm02 bash[23351]: audit 2026-03-09T17:22:24.307490+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:26.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:25 vm02 bash[23351]: audit 2026-03-09T17:22:24.905147+0000 mon.c (mon.2) 5 : audit [DBG] from='client.? 192.168.123.100:0/2510664815' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:26.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:25 vm02 bash[23351]: audit 2026-03-09T17:22:24.905147+0000 mon.c (mon.2) 5 : audit [DBG] from='client.? 192.168.123.100:0/2510664815' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:26.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:25 vm00 bash[28333]: audit 2026-03-09T17:22:24.905147+0000 mon.c (mon.2) 5 : audit [DBG] from='client.? 192.168.123.100:0/2510664815' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:26.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:25 vm00 bash[28333]: audit 2026-03-09T17:22:24.905147+0000 mon.c (mon.2) 5 : audit [DBG] from='client.? 192.168.123.100:0/2510664815' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:26.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:25 vm00 bash[20770]: audit 2026-03-09T17:22:24.905147+0000 mon.c (mon.2) 5 : audit [DBG] from='client.? 192.168.123.100:0/2510664815' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:26.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:25 vm00 bash[20770]: audit 2026-03-09T17:22:24.905147+0000 mon.c (mon.2) 5 : audit [DBG] from='client.? 
192.168.123.100:0/2510664815' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:27.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:26 vm02 bash[23351]: cluster 2026-03-09T17:22:25.327201+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:26 vm02 bash[23351]: cluster 2026-03-09T17:22:25.327201+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:27.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:26 vm00 bash[28333]: cluster 2026-03-09T17:22:25.327201+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:27.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:26 vm00 bash[28333]: cluster 2026-03-09T17:22:25.327201+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:27.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:26 vm00 bash[20770]: cluster 2026-03-09T17:22:25.327201+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:27.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:26 vm00 bash[20770]: cluster 2026-03-09T17:22:25.327201+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:29.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:28 vm02 bash[23351]: cluster 2026-03-09T17:22:27.327398+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:29.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:28 vm02 bash[23351]: cluster 2026-03-09T17:22:27.327398+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:29.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:28 vm00 bash[28333]: cluster 2026-03-09T17:22:27.327398+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:29.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:28 vm00 bash[28333]: cluster 2026-03-09T17:22:27.327398+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:29.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:28 vm00 bash[20770]: cluster 2026-03-09T17:22:27.327398+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:29.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:28 vm00 bash[20770]: cluster 2026-03-09T17:22:27.327398+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:31.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:30 vm02 bash[23351]: cluster 2026-03-09T17:22:29.327594+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:30 vm02 bash[23351]: cluster 2026-03-09T17:22:29.327594+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:30 vm00 bash[20770]: cluster 
2026-03-09T17:22:29.327594+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:30 vm00 bash[20770]: cluster 2026-03-09T17:22:29.327594+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:30 vm00 bash[28333]: cluster 2026-03-09T17:22:29.327594+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:30 vm00 bash[28333]: cluster 2026-03-09T17:22:29.327594+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:33.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:32 vm02 bash[23351]: cluster 2026-03-09T17:22:31.327789+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:33.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:32 vm02 bash[23351]: cluster 2026-03-09T17:22:31.327789+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:33.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:32 vm00 bash[20770]: cluster 2026-03-09T17:22:31.327789+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:33.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:32 vm00 bash[20770]: cluster 2026-03-09T17:22:31.327789+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:33.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:32 vm00 bash[28333]: cluster 2026-03-09T17:22:31.327789+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:33.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:32 vm00 bash[28333]: cluster 2026-03-09T17:22:31.327789+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:33 vm00 bash[28333]: audit 2026-03-09T17:22:33.370422+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:33 vm00 bash[28333]: audit 2026-03-09T17:22:33.370422+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:33 vm00 bash[28333]: audit 2026-03-09T17:22:33.370988+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:33 vm00 bash[28333]: audit 2026-03-09T17:22:33.370988+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: 
/etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:33 vm00 bash[20770]: audit 2026-03-09T17:22:33.370422+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:33 vm00 bash[20770]: audit 2026-03-09T17:22:33.370422+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:33 vm00 bash[20770]: audit 2026-03-09T17:22:33.370988+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:33 vm00 bash[20770]: audit 2026-03-09T17:22:33.370988+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:34.174 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:34.174 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:22:34.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:33 vm02 bash[23351]: audit 2026-03-09T17:22:33.370422+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T17:22:34.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:33 vm02 bash[23351]: audit 2026-03-09T17:22:33.370422+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T17:22:34.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:33 vm02 bash[23351]: audit 2026-03-09T17:22:33.370988+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:34.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:33 vm02 bash[23351]: audit 2026-03-09T17:22:33.370988+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:34.442 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:34.443 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:34.443 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:22:34.443 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:22:34 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: cluster 2026-03-09T17:22:33.327988+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: cluster 2026-03-09T17:22:33.327988+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: cephadm 2026-03-09T17:22:33.371451+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: cephadm 2026-03-09T17:22:33.371451+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: audit 2026-03-09T17:22:34.405150+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: audit 2026-03-09T17:22:34.405150+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: audit 2026-03-09T17:22:34.410190+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: audit 2026-03-09T17:22:34.410190+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: audit 2026-03-09T17:22:34.416496+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:34 vm00 bash[28333]: audit 2026-03-09T17:22:34.416496+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: cluster 2026-03-09T17:22:33.327988+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: cluster 2026-03-09T17:22:33.327988+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: cephadm 2026-03-09T17:22:33.371451+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T17:22:35.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: cephadm 2026-03-09T17:22:33.371451+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T17:22:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: audit 2026-03-09T17:22:34.405150+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T17:22:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: audit 2026-03-09T17:22:34.405150+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: audit 2026-03-09T17:22:34.410190+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: audit 2026-03-09T17:22:34.410190+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: audit 2026-03-09T17:22:34.416496+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:34 vm00 bash[20770]: audit 2026-03-09T17:22:34.416496+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: cluster 2026-03-09T17:22:33.327988+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: cluster 2026-03-09T17:22:33.327988+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: cephadm 2026-03-09T17:22:33.371451+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: cephadm 2026-03-09T17:22:33.371451+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: audit 2026-03-09T17:22:34.405150+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: audit 2026-03-09T17:22:34.405150+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: audit 2026-03-09T17:22:34.410190+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: audit 2026-03-09T17:22:34.410190+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: audit 2026-03-09T17:22:34.416496+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:35.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:34 vm02 bash[23351]: audit 2026-03-09T17:22:34.416496+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
2026-03-09T17:22:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:36 vm00 bash[28333]: cluster 2026-03-09T17:22:35.328220+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:36 vm00 bash[28333]: cluster 2026-03-09T17:22:35.328220+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:37.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:36 vm00 bash[20770]: cluster 2026-03-09T17:22:35.328220+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:37.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:36 vm00 bash[20770]: cluster 2026-03-09T17:22:35.328220+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:37.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:36 vm02 bash[23351]: cluster 2026-03-09T17:22:35.328220+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:37.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:36 vm02 bash[23351]: cluster 2026-03-09T17:22:35.328220+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:37 vm00 bash[28333]: audit 2026-03-09T17:22:37.756406+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:37 vm00 bash[28333]: audit 2026-03-09T17:22:37.756406+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:37 vm00 bash[28333]: audit 2026-03-09T17:22:37.756850+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:37 vm00 bash[28333]: audit 2026-03-09T17:22:37.756850+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:37 vm00 bash[20770]: audit 2026-03-09T17:22:37.756406+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:37 vm00 bash[20770]: audit 2026-03-09T17:22:37.756406+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:37 vm00 bash[20770]: audit 2026-03-09T17:22:37.756850+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 
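A note on the crush calls that follow: once osd.1 reports in, it is tagged with device class hdd and then registered under host=vm00/root=default by 'osd crush create-or-move' with weight 0.0195. That weight is simply the ~20 GiB device capacity expressed in TiB (20 / 1024 ≈ 0.0195), which is how CRUSH weights are derived from device size by default.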
2026-03-09T17:22:38.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:37 vm00 bash[20770]: audit 2026-03-09T17:22:37.756850+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:37 vm02 bash[23351]: audit 2026-03-09T17:22:37.756406+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:37 vm02 bash[23351]: audit 2026-03-09T17:22:37.756406+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:37 vm02 bash[23351]: audit 2026-03-09T17:22:37.756850+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:38.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:37 vm02 bash[23351]: audit 2026-03-09T17:22:37.756850+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: cluster 2026-03-09T17:22:37.328422+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: cluster 2026-03-09T17:22:37.328422+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.910697+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.910697+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: cluster 2026-03-09T17:22:37.913521+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: cluster 2026-03-09T17:22:37.913521+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.913674+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.913674+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:39.287 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.914825+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.914825+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.915064+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:38 vm00 bash[28333]: audit 2026-03-09T17:22:37.915064+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: cluster 2026-03-09T17:22:37.328422+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: cluster 2026-03-09T17:22:37.328422+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.910697+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.910697+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: cluster 2026-03-09T17:22:37.913521+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T17:22:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: cluster 2026-03-09T17:22:37.913521+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T17:22:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.913674+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.913674+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.914825+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' 
cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.914825+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.915064+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:38 vm00 bash[20770]: audit 2026-03-09T17:22:37.915064+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: cluster 2026-03-09T17:22:37.328422+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: cluster 2026-03-09T17:22:37.328422+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.910697+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.910697+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: cluster 2026-03-09T17:22:37.913521+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: cluster 2026-03-09T17:22:37.913521+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.913674+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.913674+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.914825+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: 
audit 2026-03-09T17:22:37.914825+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.100:6805/1086087815' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.915064+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:39.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:38 vm02 bash[23351]: audit 2026-03-09T17:22:37.915064+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: audit 2026-03-09T17:22:38.913215+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: audit 2026-03-09T17:22:38.913215+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: cluster 2026-03-09T17:22:38.916842+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: cluster 2026-03-09T17:22:38.916842+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: audit 2026-03-09T17:22:38.918941+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: audit 2026-03-09T17:22:38.918941+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: cluster 2026-03-09T17:22:39.328694+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: cluster 2026-03-09T17:22:39.328694+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: audit 2026-03-09T17:22:39.918371+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:39 vm00 bash[28333]: audit 2026-03-09T17:22:39.918371+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.287 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: audit 2026-03-09T17:22:38.913215+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:40.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: audit 2026-03-09T17:22:38.913215+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: cluster 2026-03-09T17:22:38.916842+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: cluster 2026-03-09T17:22:38.916842+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: audit 2026-03-09T17:22:38.918941+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: audit 2026-03-09T17:22:38.918941+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: cluster 2026-03-09T17:22:39.328694+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: cluster 2026-03-09T17:22:39.328694+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: audit 2026-03-09T17:22:39.918371+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:39 vm00 bash[20770]: audit 2026-03-09T17:22:39.918371+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: audit 2026-03-09T17:22:38.913215+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:40.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: audit 2026-03-09T17:22:38.913215+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:22:40.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: cluster 2026-03-09T17:22:38.916842+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: cluster 
2026-03-09T17:22:38.916842+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: audit 2026-03-09T17:22:38.918941+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: audit 2026-03-09T17:22:38.918941+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: cluster 2026-03-09T17:22:39.328694+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: cluster 2026-03-09T17:22:39.328694+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: audit 2026-03-09T17:22:39.918371+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:39 vm02 bash[23351]: audit 2026-03-09T17:22:39.918371+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:38.798104+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:38.798104+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:38.798158+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:38.798158+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:39.926432+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 v2:192.168.123.100:6805/1086087815 boot 2026-03-09T17:22:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:39.926432+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 v2:192.168.123.100:6805/1086087815 boot 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:39.926571+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: cluster 2026-03-09T17:22:39.926571+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:39.927664+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:39.927664+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.520473+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.520473+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.525177+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.525177+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.526549+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.526549+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.527041+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.527041+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.531821+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:40 vm00 bash[28333]: audit 2026-03-09T17:22:40.531821+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:38.798104+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:38.798104+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:38.798158+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:38.798158+0000 osd.1 (osd.1) 2 : 
cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:39.926432+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 v2:192.168.123.100:6805/1086087815 boot 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:39.926432+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 v2:192.168.123.100:6805/1086087815 boot 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:39.926571+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: cluster 2026-03-09T17:22:39.926571+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:39.927664+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:39.927664+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.520473+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.520473+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.525177+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.525177+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.526549+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.526549+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.527041+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.527041+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.531821+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:40 vm00 bash[20770]: audit 2026-03-09T17:22:40.531821+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:38.798104+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:41.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:38.798104+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:22:41.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:38.798158+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:38.798158+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:39.926432+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 v2:192.168.123.100:6805/1086087815 boot 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:39.926432+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 v2:192.168.123.100:6805/1086087815 boot 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:39.926571+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: cluster 2026-03-09T17:22:39.926571+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:39.927664+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:39.927664+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.520473+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.520473+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.525177+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.525177+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: 
audit 2026-03-09T17:22:40.526549+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.526549+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.527041+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.527041+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.531821+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:40 vm02 bash[23351]: audit 2026-03-09T17:22:40.531821+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 1 on host 'vm00' 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.419+0000 7f3bcb7fe640 1 -- 192.168.123.100:0/4115545497 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f3ba0002bf0 con 0x7f3ba8077640 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 >> v2:192.168.123.100:6800/3114914985 conn(0x7f3ba8077640 msgr2=0x7f3ba8079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 --2- 192.168.123.100:0/4115545497 >> v2:192.168.123.100:6800/3114914985 conn(0x7f3ba8077640 0x7f3ba8079b00 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f3bdc194f10 tx=0x7f3bd00023d0 comp rx=0 tx=0).stop 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 msgr2=0x7f3bdc196f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc196f90 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f3bcc002a80 tx=0x7f3bcc002f50 comp rx=0 tx=0).stop 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 shutdown_connections 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 --2- 192.168.123.100:0/4115545497 >> v2:192.168.123.100:6800/3114914985 conn(0x7f3ba8077640 0x7f3ba8079b00 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3bdc1070d0 0x7f3bdc193a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3bdc1023d0 0x7f3bdc1974d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 --2- 192.168.123.100:0/4115545497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3bdc069a50 0x7f3bdc196f90 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 >> 192.168.123.100:0/4115545497 conn(0x7f3bdc0fc420 msgr2=0x7f3bdc0fe810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 shutdown_connections 2026-03-09T17:22:41.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:41.423+0000 7f3bc97fa640 1 -- 192.168.123.100:0/4115545497 wait complete. 2026-03-09T17:22:41.513 DEBUG:teuthology.orchestra.run.vm00:osd.1> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.1.service 2026-03-09T17:22:41.514 INFO:tasks.cephadm:Deploying osd.2 on vm00 with /dev/vdc... 
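The task now repeats the same per-device sequence for /dev/vdc. Recapped from the commands visible in the next few log lines (same container image and fsid as this run): the device is first zapped with ceph-volume so any leftover LVM or partition metadata is wiped, and only then handed to the orchestrator, which creates the OSD directly from host:device rather than via a service spec:

    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdc
    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm00:/dev/vdc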
2026-03-09T17:22:41.514 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdc 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: cluster 2026-03-09T17:22:41.328921+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: cluster 2026-03-09T17:22:41.328921+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: audit 2026-03-09T17:22:41.411554+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: audit 2026-03-09T17:22:41.411554+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: audit 2026-03-09T17:22:41.417370+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: audit 2026-03-09T17:22:41.417370+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: audit 2026-03-09T17:22:41.421769+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: audit 2026-03-09T17:22:41.421769+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: cluster 2026-03-09T17:22:41.535645+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:41 vm00 bash[20770]: cluster 2026-03-09T17:22:41.535645+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: cluster 2026-03-09T17:22:41.328921+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: cluster 2026-03-09T17:22:41.328921+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: audit 2026-03-09T17:22:41.411554+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: audit 
2026-03-09T17:22:41.411554+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: audit 2026-03-09T17:22:41.417370+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: audit 2026-03-09T17:22:41.417370+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: audit 2026-03-09T17:22:41.421769+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: audit 2026-03-09T17:22:41.421769+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: cluster 2026-03-09T17:22:41.535645+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T17:22:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:41 vm00 bash[28333]: cluster 2026-03-09T17:22:41.535645+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T17:22:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: cluster 2026-03-09T17:22:41.328921+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: cluster 2026-03-09T17:22:41.328921+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: audit 2026-03-09T17:22:41.411554+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: audit 2026-03-09T17:22:41.411554+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:22:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: audit 2026-03-09T17:22:41.417370+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: audit 2026-03-09T17:22:41.417370+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: audit 2026-03-09T17:22:41.421769+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: audit 2026-03-09T17:22:41.421769+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: cluster 
2026-03-09T17:22:41.535645+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T17:22:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:41 vm02 bash[23351]: cluster 2026-03-09T17:22:41.535645+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T17:22:44.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:44 vm00 bash[28333]: cluster 2026-03-09T17:22:43.329137+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:44.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:44 vm00 bash[28333]: cluster 2026-03-09T17:22:43.329137+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:44 vm00 bash[20770]: cluster 2026-03-09T17:22:43.329137+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:44.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:44 vm00 bash[20770]: cluster 2026-03-09T17:22:43.329137+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:44.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:44 vm02 bash[23351]: cluster 2026-03-09T17:22:43.329137+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:44.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:44 vm02 bash[23351]: cluster 2026-03-09T17:22:43.329137+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:46.177 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:22:46.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:46 vm00 bash[28333]: cluster 2026-03-09T17:22:45.329400+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:46.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:46 vm00 bash[28333]: cluster 2026-03-09T17:22:45.329400+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:46.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:46 vm00 bash[20770]: cluster 2026-03-09T17:22:45.329400+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:46.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:46 vm00 bash[20770]: cluster 2026-03-09T17:22:45.329400+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:46.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:46 vm02 bash[23351]: cluster 2026-03-09T17:22:45.329400+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:46.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:46 vm02 bash[23351]: cluster 2026-03-09T17:22:45.329400+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:47.030 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:22:47.045 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 
shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm00:/dev/vdc 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: cluster 2026-03-09T17:22:47.329588+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: cluster 2026-03-09T17:22:47.329588+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: cephadm 2026-03-09T17:22:47.769503+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: cephadm 2026-03-09T17:22:47.769503+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.775215+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.775215+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.782603+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.782603+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.784994+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.784994+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.785808+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.785808+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.786237+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 
bash[28333]: audit 2026-03-09T17:22:47.786237+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.791164+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:48 vm00 bash[28333]: audit 2026-03-09T17:22:47.791164+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: cluster 2026-03-09T17:22:47.329588+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: cluster 2026-03-09T17:22:47.329588+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: cephadm 2026-03-09T17:22:47.769503+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: cephadm 2026-03-09T17:22:47.769503+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.775215+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.775215+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.782603+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.782603+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.784994+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.784994+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.785808+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.785808+0000 mon.a (mon.0) 369 : 
audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.786237+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.786237+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.791164+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:48 vm00 bash[20770]: audit 2026-03-09T17:22:47.791164+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: cluster 2026-03-09T17:22:47.329588+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:49.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: cluster 2026-03-09T17:22:47.329588+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:49.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: cephadm 2026-03-09T17:22:47.769503+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: cephadm 2026-03-09T17:22:47.769503+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.775215+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.775215+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.782603+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.782603+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.784994+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.784994+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", 
"who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.785808+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.785808+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.786237+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.786237+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.791164+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:48 vm02 bash[23351]: audit 2026-03-09T17:22:47.791164+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:22:51.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:50 vm00 bash[28333]: cluster 2026-03-09T17:22:49.329835+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:51.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:50 vm00 bash[28333]: cluster 2026-03-09T17:22:49.329835+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:51.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:50 vm00 bash[20770]: cluster 2026-03-09T17:22:49.329835+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:51.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:50 vm00 bash[20770]: cluster 2026-03-09T17:22:49.329835+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:51.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:50 vm02 bash[23351]: cluster 2026-03-09T17:22:49.329835+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:51.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:50 vm02 bash[23351]: cluster 2026-03-09T17:22:49.329835+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:51.696 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/1692041194 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4106610 msgr2=0x7ff1e4106ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/1692041194 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4106610 0x7ff1e4106ac0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7ff1d800b3e0 tx=0x7ff1d802f6a0 comp rx=0 tx=0).stop 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/1692041194 shutdown_connections 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/1692041194 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4106610 0x7ff1e4106ac0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/1692041194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4103c60 0x7ff1e41060d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/1692041194 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e4103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/1692041194 >> 192.168.123.100:0/1692041194 conn(0x7ff1e40fd120 msgr2=0x7ff1e40ff560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/1692041194 shutdown_connections 2026-03-09T17:22:51.853 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/1692041194 wait complete. 
2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 Processor -- start 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- start start 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e41a0a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4103c60 0x7ff1e41a0fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4106610 0x7ff1e41a8020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff1e4114380 con 0x7ff1e4101310 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7ff1e4114200 con 0x7ff1e4106610 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e9bf7640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff1e4114500 con 0x7ff1e4103c60 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e41a0a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e41a0a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:58192/0 (socket says 192.168.123.100:58192) 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 -- 192.168.123.100:0/2107034102 learned_addr learned my addr 192.168.123.100:0/2107034102 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:22:51.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e3fff640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4106610 0x7ff1e41a8020 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 -- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4103c60 msgr2=0x7ff1e41a0fa0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:22:51.855 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e2ffd640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4103c60 0x7ff1e41a0fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4103c60 0x7ff1e41a0fa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 -- 192.168.123.100:0/2107034102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4106610 msgr2=0x7ff1e41a8020 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4106610 0x7ff1e41a8020 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e37fe640 1 -- 192.168.123.100:0/2107034102 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff1e41a8720 con 0x7ff1e4101310 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e3fff640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4106610 0x7ff1e41a8020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.851+0000 7ff1e2ffd640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4103c60 0x7ff1e41a0fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:22:51.855 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e37fe640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e41a0a60 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7ff1d400b780 tx=0x7ff1d400bc50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:22:51.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff1d4004480 con 0x7ff1e4101310 2026-03-09T17:22:51.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7ff1d4010070 con 0x7ff1e4101310 2026-03-09T17:22:51.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff1d400ca40 con 0x7ff1e4101310 2026-03-09T17:22:51.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff1e41a8a10 con 0x7ff1e4101310 2026-03-09T17:22:51.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff1e41a8f50 con 0x7ff1e4101310 2026-03-09T17:22:51.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 99944+0+0 (secure 0 0 0) 0x7ff1d400cbe0 con 0x7ff1e4101310 2026-03-09T17:22:51.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff1e4106ac0 con 0x7ff1e4101310 2026-03-09T17:22:51.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e0ff9640 1 --2- 192.168.123.100:0/2107034102 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff1b8077580 0x7ff1b8079a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:22:51.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.855+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(14..14 src has 1..14) ==== 1823+0+0 (secure 0 0 0) 0x7ff1d409c050 con 0x7ff1e4101310 2026-03-09T17:22:51.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.859+0000 7ff1e2ffd640 1 --2- 192.168.123.100:0/2107034102 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff1b8077580 0x7ff1b8079a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:22:51.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.859+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff1d4064850 con 0x7ff1e4101310 
2026-03-09T17:22:51.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.859+0000 7ff1e2ffd640 1 --2- 192.168.123.100:0/2107034102 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff1b8077580 0x7ff1b8079a40 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7ff1e41a1f80 tx=0x7ff1d0009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:22:51.964 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:22:51.963+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7ff1e40630c0 con 0x7ff1b8077580 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: cluster 2026-03-09T17:22:51.330096+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: cluster 2026-03-09T17:22:51.330096+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: audit 2026-03-09T17:22:51.967164+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: audit 2026-03-09T17:22:51.967164+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: audit 2026-03-09T17:22:51.968423+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: audit 2026-03-09T17:22:51.968423+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: audit 2026-03-09T17:22:51.968782+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:52 vm00 bash[28333]: audit 2026-03-09T17:22:51.968782+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: cluster 2026-03-09T17:22:51.330096+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: cluster 2026-03-09T17:22:51.330096+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 
bash[20770]: audit 2026-03-09T17:22:51.967164+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: audit 2026-03-09T17:22:51.967164+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: audit 2026-03-09T17:22:51.968423+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: audit 2026-03-09T17:22:51.968423+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: audit 2026-03-09T17:22:51.968782+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:52 vm00 bash[20770]: audit 2026-03-09T17:22:51.968782+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: cluster 2026-03-09T17:22:51.330096+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: cluster 2026-03-09T17:22:51.330096+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: audit 2026-03-09T17:22:51.967164+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: audit 2026-03-09T17:22:51.967164+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: audit 2026-03-09T17:22:51.968423+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: audit 2026-03-09T17:22:51.968423+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: audit 2026-03-09T17:22:51.968782+0000 mon.a (mon.0) 374 : audit 
[DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:53.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:52 vm02 bash[23351]: audit 2026-03-09T17:22:51.968782+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:22:54.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:53 vm02 bash[23351]: audit 2026-03-09T17:22:51.965558+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:54.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:53 vm02 bash[23351]: audit 2026-03-09T17:22:51.965558+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:54.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:53 vm00 bash[28333]: audit 2026-03-09T17:22:51.965558+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:54.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:53 vm00 bash[28333]: audit 2026-03-09T17:22:51.965558+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:53 vm00 bash[20770]: audit 2026-03-09T17:22:51.965558+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:53 vm00 bash[20770]: audit 2026-03-09T17:22:51.965558+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:22:55.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:54 vm02 bash[23351]: cluster 2026-03-09T17:22:53.330340+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:55.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:54 vm02 bash[23351]: cluster 2026-03-09T17:22:53.330340+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:55.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:54 vm00 bash[28333]: cluster 2026-03-09T17:22:53.330340+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:55.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:54 vm00 bash[28333]: cluster 2026-03-09T17:22:53.330340+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:55.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:54 vm00 bash[20770]: cluster 2026-03-09T17:22:53.330340+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:55.287 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:54 vm00 bash[20770]: cluster 2026-03-09T17:22:53.330340+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:56 vm00 bash[28333]: cluster 2026-03-09T17:22:55.330581+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:56 vm00 bash[28333]: cluster 2026-03-09T17:22:55.330581+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:56 vm00 bash[20770]: cluster 2026-03-09T17:22:55.330581+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:56 vm00 bash[20770]: cluster 2026-03-09T17:22:55.330581+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:57.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:56 vm02 bash[23351]: cluster 2026-03-09T17:22:55.330581+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:56 vm02 bash[23351]: cluster 2026-03-09T17:22:55.330581+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.393057+0000 mon.b (mon.1) 9 : audit [INF] from='client.? 192.168.123.100:0/2918903372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.393057+0000 mon.b (mon.1) 9 : audit [INF] from='client.? 192.168.123.100:0/2918903372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.393313+0000 mon.a (mon.0) 375 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.393313+0000 mon.a (mon.0) 375 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.395868+0000 mon.a (mon.0) 376 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]': finished 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.395868+0000 mon.a (mon.0) 376 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]': finished 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: cluster 2026-03-09T17:22:57.400781+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: cluster 2026-03-09T17:22:57.400781+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T17:22:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.400883+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:57 vm00 bash[28333]: audit 2026-03-09T17:22:57.400883+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.393057+0000 mon.b (mon.1) 9 : audit [INF] from='client.? 192.168.123.100:0/2918903372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.393057+0000 mon.b (mon.1) 9 : audit [INF] from='client.? 192.168.123.100:0/2918903372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.393313+0000 mon.a (mon.0) 375 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.393313+0000 mon.a (mon.0) 375 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.395868+0000 mon.a (mon.0) 376 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]': finished 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.395868+0000 mon.a (mon.0) 376 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]': finished 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: cluster 2026-03-09T17:22:57.400781+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: cluster 2026-03-09T17:22:57.400781+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.400883+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:22:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:57 vm00 bash[20770]: audit 2026-03-09T17:22:57.400883+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.393057+0000 mon.b (mon.1) 9 : audit [INF] from='client.? 192.168.123.100:0/2918903372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.393057+0000 mon.b (mon.1) 9 : audit [INF] from='client.? 192.168.123.100:0/2918903372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.393313+0000 mon.a (mon.0) 375 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.393313+0000 mon.a (mon.0) 375 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]: dispatch 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.395868+0000 mon.a (mon.0) 376 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]': finished 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.395868+0000 mon.a (mon.0) 376 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7306de3d-a962-4f03-99cd-7f218259f7e5"}]': finished 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: cluster 2026-03-09T17:22:57.400781+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: cluster 2026-03-09T17:22:57.400781+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.400883+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:22:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:57 vm02 bash[23351]: audit 2026-03-09T17:22:57.400883+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:22:59.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:58 vm02 bash[23351]: cluster 2026-03-09T17:22:57.330781+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:58 vm02 bash[23351]: cluster 2026-03-09T17:22:57.330781+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:58 vm02 bash[23351]: audit 2026-03-09T17:22:58.021479+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/1492865243' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:22:58 vm02 bash[23351]: audit 2026-03-09T17:22:58.021479+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/1492865243' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:58 vm00 bash[20770]: cluster 2026-03-09T17:22:57.330781+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:58 vm00 bash[20770]: cluster 2026-03-09T17:22:57.330781+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:58 vm00 bash[20770]: audit 2026-03-09T17:22:58.021479+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/1492865243' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:22:58 vm00 bash[20770]: audit 2026-03-09T17:22:58.021479+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 
192.168.123.100:0/1492865243' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:58 vm00 bash[28333]: cluster 2026-03-09T17:22:57.330781+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:58 vm00 bash[28333]: cluster 2026-03-09T17:22:57.330781+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:58 vm00 bash[28333]: audit 2026-03-09T17:22:58.021479+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/1492865243' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:22:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:22:58 vm00 bash[28333]: audit 2026-03-09T17:22:58.021479+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/1492865243' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:01.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:00 vm02 bash[23351]: cluster 2026-03-09T17:22:59.331038+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:01.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:00 vm02 bash[23351]: cluster 2026-03-09T17:22:59.331038+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:00 vm00 bash[20770]: cluster 2026-03-09T17:22:59.331038+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:01.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:00 vm00 bash[20770]: cluster 2026-03-09T17:22:59.331038+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:00 vm00 bash[28333]: cluster 2026-03-09T17:22:59.331038+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:00 vm00 bash[28333]: cluster 2026-03-09T17:22:59.331038+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:03.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:02 vm02 bash[23351]: cluster 2026-03-09T17:23:01.331356+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:03.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:02 vm02 bash[23351]: cluster 2026-03-09T17:23:01.331356+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:03.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:02 vm00 bash[20770]: cluster 2026-03-09T17:23:01.331356+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:03.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:02 vm00 bash[20770]: cluster 2026-03-09T17:23:01.331356+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T17:23:03.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:02 vm00 bash[28333]: cluster 2026-03-09T17:23:01.331356+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:03.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:02 vm00 bash[28333]: cluster 2026-03-09T17:23:01.331356+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:05.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:04 vm02 bash[23351]: cluster 2026-03-09T17:23:03.331623+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:05.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:04 vm02 bash[23351]: cluster 2026-03-09T17:23:03.331623+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:05.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:04 vm00 bash[20770]: cluster 2026-03-09T17:23:03.331623+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:05.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:04 vm00 bash[20770]: cluster 2026-03-09T17:23:03.331623+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:05.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:04 vm00 bash[28333]: cluster 2026-03-09T17:23:03.331623+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:05.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:04 vm00 bash[28333]: cluster 2026-03-09T17:23:03.331623+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:06 vm00 bash[28333]: cluster 2026-03-09T17:23:05.332183+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:06 vm00 bash[28333]: cluster 2026-03-09T17:23:05.332183+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:06 vm00 bash[28333]: audit 2026-03-09T17:23:06.306991+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:06 vm00 bash[28333]: audit 2026-03-09T17:23:06.306991+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:06 vm00 bash[28333]: audit 2026-03-09T17:23:06.307590+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:06 vm00 bash[28333]: audit 2026-03-09T17:23:06.307590+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T17:23:07.125 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:06 vm00 bash[20770]: cluster 2026-03-09T17:23:05.332183+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:06 vm00 bash[20770]: cluster 2026-03-09T17:23:05.332183+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:06 vm00 bash[20770]: audit 2026-03-09T17:23:06.306991+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:06 vm00 bash[20770]: audit 2026-03-09T17:23:06.306991+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:06 vm00 bash[20770]: audit 2026-03-09T17:23:06.307590+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:07.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:06 vm00 bash[20770]: audit 2026-03-09T17:23:06.307590+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:07.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:06 vm02 bash[23351]: cluster 2026-03-09T17:23:05.332183+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:06 vm02 bash[23351]: cluster 2026-03-09T17:23:05.332183+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:07.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:06 vm02 bash[23351]: audit 2026-03-09T17:23:06.306991+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T17:23:07.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:06 vm02 bash[23351]: audit 2026-03-09T17:23:06.306991+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T17:23:07.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:06 vm02 bash[23351]: audit 2026-03-09T17:23:06.307590+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:07.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:06 vm02 bash[23351]: audit 2026-03-09T17:23:06.307590+0000 mon.a 
(mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:07.435 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:07.435 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:23:07 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: cephadm 2026-03-09T17:23:06.308028+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: cephadm 2026-03-09T17:23:06.308028+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: audit 2026-03-09T17:23:07.391409+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: audit 2026-03-09T17:23:07.391409+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: audit 2026-03-09T17:23:07.397035+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: audit 2026-03-09T17:23:07.397035+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: audit 2026-03-09T17:23:07.402513+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:07 vm00 bash[28333]: audit 2026-03-09T17:23:07.402513+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: cephadm 2026-03-09T17:23:06.308028+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: cephadm 2026-03-09T17:23:06.308028+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T17:23:08.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: audit 2026-03-09T17:23:07.391409+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: audit 2026-03-09T17:23:07.391409+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T17:23:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: audit 2026-03-09T17:23:07.397035+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: audit 2026-03-09T17:23:07.397035+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: audit 2026-03-09T17:23:07.402513+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:07 vm00 bash[20770]: audit 2026-03-09T17:23:07.402513+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: cephadm 2026-03-09T17:23:06.308028+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: cephadm 2026-03-09T17:23:06.308028+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: audit 2026-03-09T17:23:07.391409+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: audit 2026-03-09T17:23:07.391409+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: audit 2026-03-09T17:23:07.397035+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: audit 2026-03-09T17:23:07.397035+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: audit 2026-03-09T17:23:07.402513+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:07 vm02 bash[23351]: audit 2026-03-09T17:23:07.402513+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:09.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:08 vm02 bash[23351]: cluster 2026-03-09T17:23:07.332373+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:09.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:08 vm02 bash[23351]: cluster 2026-03-09T17:23:07.332373+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:08 vm00 bash[28333]: cluster 2026-03-09T17:23:07.332373+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T17:23:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:08 vm00 bash[28333]: cluster 2026-03-09T17:23:07.332373+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:08 vm00 bash[20770]: cluster 2026-03-09T17:23:07.332373+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:08 vm00 bash[20770]: cluster 2026-03-09T17:23:07.332373+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:10 vm02 bash[23351]: cluster 2026-03-09T17:23:09.332609+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:10 vm02 bash[23351]: cluster 2026-03-09T17:23:09.332609+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:10 vm02 bash[23351]: audit 2026-03-09T17:23:10.804023+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:10 vm02 bash[23351]: audit 2026-03-09T17:23:10.804023+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:10 vm02 bash[23351]: audit 2026-03-09T17:23:10.804391+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:10 vm02 bash[23351]: audit 2026-03-09T17:23:10.804391+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:10 vm00 bash[28333]: cluster 2026-03-09T17:23:09.332609+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:10 vm00 bash[28333]: cluster 2026-03-09T17:23:09.332609+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:10 vm00 bash[28333]: audit 2026-03-09T17:23:10.804023+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:10 vm00 bash[28333]: audit 2026-03-09T17:23:10.804023+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:10 vm00 bash[28333]: audit 2026-03-09T17:23:10.804391+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:10 vm00 bash[28333]: audit 2026-03-09T17:23:10.804391+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:10 vm00 bash[20770]: cluster 2026-03-09T17:23:09.332609+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:10 vm00 bash[20770]: cluster 2026-03-09T17:23:09.332609+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:10 vm00 bash[20770]: audit 2026-03-09T17:23:10.804023+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:10 vm00 bash[20770]: audit 2026-03-09T17:23:10.804023+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:10 vm00 bash[20770]: audit 2026-03-09T17:23:10.804391+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:11.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:10 vm00 bash[20770]: audit 2026-03-09T17:23:10.804391+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T17:23:12.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.887359+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.887359+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: cluster 2026-03-09T17:23:10.890705+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: cluster 2026-03-09T17:23:10.890705+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.890821+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 
vm02 bash[23351]: audit 2026-03-09T17:23:10.890821+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.891565+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.891565+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.891790+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:11 vm02 bash[23351]: audit 2026-03-09T17:23:10.891790+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.887359+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.887359+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: cluster 2026-03-09T17:23:10.890705+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: cluster 2026-03-09T17:23:10.890705+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.890821+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.890821+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.891565+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.891565+0000 mon.c 
(mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.891790+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:11 vm00 bash[28333]: audit 2026-03-09T17:23:10.891790+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.887359+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.887359+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T17:23:12.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: cluster 2026-03-09T17:23:10.890705+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: cluster 2026-03-09T17:23:10.890705+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.890821+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.890821+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.891565+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.891565+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4038313383' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.891790+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:11 vm00 bash[20770]: audit 2026-03-09T17:23:10.891790+0000 mon.a (mon.0) 388 : audit [INF] 
from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: cluster 2026-03-09T17:23:11.332812+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: cluster 2026-03-09T17:23:11.332812+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:11.889754+0000 mon.a (mon.0) 389 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:11.889754+0000 mon.a (mon.0) 389 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: cluster 2026-03-09T17:23:11.892064+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: cluster 2026-03-09T17:23:11.892064+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:11.892799+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:11.892799+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:11.901244+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:11.901244+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:12.783228+0000 mon.a (mon.0) 393 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:12 vm00 bash[28333]: audit 2026-03-09T17:23:12.783228+0000 mon.a (mon.0) 393 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: cluster 2026-03-09T17:23:11.332812+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: cluster 2026-03-09T17:23:11.332812+0000 
mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:11.889754+0000 mon.a (mon.0) 389 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:11.889754+0000 mon.a (mon.0) 389 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: cluster 2026-03-09T17:23:11.892064+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: cluster 2026-03-09T17:23:11.892064+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:11.892799+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:11.892799+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:11.901244+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:11.901244+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:12.783228+0000 mon.a (mon.0) 393 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T17:23:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:12 vm00 bash[20770]: audit 2026-03-09T17:23:12.783228+0000 mon.a (mon.0) 393 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T17:23:13.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: cluster 2026-03-09T17:23:11.332812+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:13.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: cluster 2026-03-09T17:23:11.332812+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:13.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:11.889754+0000 mon.a (mon.0) 389 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:13.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 
2026-03-09T17:23:11.889754+0000 mon.a (mon.0) 389 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: cluster 2026-03-09T17:23:11.892064+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: cluster 2026-03-09T17:23:11.892064+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:11.892799+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:11.892799+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:11.901244+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:11.901244+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:12.783228+0000 mon.a (mon.0) 393 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T17:23:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:12 vm02 bash[23351]: audit 2026-03-09T17:23:12.783228+0000 mon.a (mon.0) 393 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:11.822399+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:11.822399+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:11.822441+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:11.822441+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:12.896958+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:12.896958+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: 
audit 2026-03-09T17:23:13.572445+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:13.572445+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:13.579181+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:13.579181+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:13.787264+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 v2:192.168.123.100:6809/4038313383 boot 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:13.787264+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 v2:192.168.123.100:6809/4038313383 boot 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:13.787383+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: cluster 2026-03-09T17:23:13.787383+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:13.788862+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:13 vm00 bash[28333]: audit 2026-03-09T17:23:13.788862+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:11.822399+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:11.822399+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:11.822441+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:11.822441+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:12.896958+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:12.896958+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:13.572445+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:13.572445+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:13.579181+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:13.579181+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:13.787264+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 v2:192.168.123.100:6809/4038313383 boot 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:13.787264+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 v2:192.168.123.100:6809/4038313383 boot 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:13.787383+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: cluster 2026-03-09T17:23:13.787383+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:13.788862+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:13 vm00 bash[20770]: audit 2026-03-09T17:23:13.788862+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:11.822399+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:11.822399+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:11.822441+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:11.822441+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:12.896958+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:12.896958+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:13.572445+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:13.572445+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:13.579181+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:13.579181+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:13.787264+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 v2:192.168.123.100:6809/4038313383 boot 2026-03-09T17:23:14.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:13.787264+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 v2:192.168.123.100:6809/4038313383 boot 2026-03-09T17:23:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:13.787383+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T17:23:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: cluster 2026-03-09T17:23:13.787383+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T17:23:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:13.788862+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:13 vm02 bash[23351]: audit 2026-03-09T17:23:13.788862+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 2 on host 'vm00' 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.607+0000 7ff1e0ff9640 1 -- 192.168.123.100:0/2107034102 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7ff1e40630c0 con 0x7ff1b8077580 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff1b8077580 msgr2=0x7ff1b8079a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/2107034102 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff1b8077580 0x7ff1b8079a40 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7ff1e41a1f80 tx=0x7ff1d0009290 comp rx=0 tx=0).stop 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7ff1e4101310 msgr2=0x7ff1e41a0a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e41a0a60 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7ff1d400b780 tx=0x7ff1d400bc50 comp rx=0 tx=0).stop 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 shutdown_connections 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/2107034102 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff1b8077580 0x7ff1b8079a40 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff1e4106610 0x7ff1e41a8020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff1e4103c60 0x7ff1e41a0fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 --2- 192.168.123.100:0/2107034102 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff1e4101310 0x7ff1e41a0a60 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 >> 192.168.123.100:0/2107034102 conn(0x7ff1e40fd120 msgr2=0x7ff1e4101f30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 shutdown_connections 2026-03-09T17:23:14.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:14.611+0000 7ff1e9bf7640 1 -- 192.168.123.100:0/2107034102 wait complete. 2026-03-09T17:23:14.718 DEBUG:teuthology.orchestra.run.vm00:osd.2> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.2.service 2026-03-09T17:23:14.719 INFO:tasks.cephadm:Deploying osd.3 on vm00 with /dev/vdb... 
2026-03-09T17:23:14.719 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdb 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: cluster 2026-03-09T17:23:13.333050+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: cluster 2026-03-09T17:23:13.333050+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.051825+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.051825+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.052460+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.052460+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.057359+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.057359+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.595822+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.595822+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.602090+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.602090+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.608366+0000 
mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:14 vm00 bash[28333]: audit 2026-03-09T17:23:14.608366+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: cluster 2026-03-09T17:23:13.333050+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: cluster 2026-03-09T17:23:13.333050+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.051825+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.051825+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.052460+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.052460+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.057359+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.057359+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.595822+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.595822+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.602090+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.602090+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 
2026-03-09T17:23:14.608366+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:14 vm00 bash[20770]: audit 2026-03-09T17:23:14.608366+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: cluster 2026-03-09T17:23:13.333050+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:15.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: cluster 2026-03-09T17:23:13.333050+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T17:23:15.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.051825+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:15.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.051825+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.052460+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.052460+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.057359+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.057359+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.595822+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.595822+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.602090+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.602090+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: 
audit 2026-03-09T17:23:14.608366+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:14 vm02 bash[23351]: audit 2026-03-09T17:23:14.608366+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:15 vm00 bash[28333]: cluster 2026-03-09T17:23:14.922406+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:15 vm00 bash[28333]: cluster 2026-03-09T17:23:14.922406+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:15 vm00 bash[28333]: cluster 2026-03-09T17:23:15.333306+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:15 vm00 bash[28333]: cluster 2026-03-09T17:23:15.333306+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:15 vm00 bash[28333]: audit 2026-03-09T17:23:15.370594+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:15 vm00 bash[28333]: audit 2026-03-09T17:23:15.370594+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:15 vm00 bash[20770]: cluster 2026-03-09T17:23:14.922406+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:15 vm00 bash[20770]: cluster 2026-03-09T17:23:14.922406+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:15 vm00 bash[20770]: cluster 2026-03-09T17:23:15.333306+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:16.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:15 vm00 bash[20770]: cluster 2026-03-09T17:23:15.333306+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:15 vm00 bash[20770]: audit 2026-03-09T17:23:15.370594+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:15 vm00 bash[20770]: audit 2026-03-09T17:23:15.370594+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": 
"json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:16.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:15 vm02 bash[23351]: cluster 2026-03-09T17:23:14.922406+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T17:23:16.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:15 vm02 bash[23351]: cluster 2026-03-09T17:23:14.922406+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T17:23:16.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:15 vm02 bash[23351]: cluster 2026-03-09T17:23:15.333306+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:16.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:15 vm02 bash[23351]: cluster 2026-03-09T17:23:15.333306+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:16.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:15 vm02 bash[23351]: audit 2026-03-09T17:23:15.370594+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:16.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:15 vm02 bash[23351]: audit 2026-03-09T17:23:15.370594+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:17.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:16 vm00 bash[28333]: audit 2026-03-09T17:23:15.930893+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:17.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:16 vm00 bash[28333]: audit 2026-03-09T17:23:15.930893+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:17.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:16 vm00 bash[28333]: cluster 2026-03-09T17:23:15.937026+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:16 vm00 bash[28333]: cluster 2026-03-09T17:23:15.937026+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:16 vm00 bash[28333]: audit 2026-03-09T17:23:15.943772+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:16 vm00 bash[28333]: audit 2026-03-09T17:23:15.943772+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": 
"osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:16 vm00 bash[20770]: audit 2026-03-09T17:23:15.930893+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:16 vm00 bash[20770]: audit 2026-03-09T17:23:15.930893+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:16 vm00 bash[20770]: cluster 2026-03-09T17:23:15.937026+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:16 vm00 bash[20770]: cluster 2026-03-09T17:23:15.937026+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:16 vm00 bash[20770]: audit 2026-03-09T17:23:15.943772+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:16 vm00 bash[20770]: audit 2026-03-09T17:23:15.943772+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:17.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:16 vm02 bash[23351]: audit 2026-03-09T17:23:15.930893+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:16 vm02 bash[23351]: audit 2026-03-09T17:23:15.930893+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:16 vm02 bash[23351]: cluster 2026-03-09T17:23:15.937026+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T17:23:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:16 vm02 bash[23351]: cluster 2026-03-09T17:23:15.937026+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T17:23:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:16 vm02 bash[23351]: audit 2026-03-09T17:23:15.943772+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: 
dispatch 2026-03-09T17:23:17.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:16 vm02 bash[23351]: audit 2026-03-09T17:23:15.943772+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:16.933431+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:16.933431+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: cluster 2026-03-09T17:23:16.936745+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: cluster 2026-03-09T17:23:16.936745+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.040695+0000 mon.a (mon.0) 413 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.040695+0000 mon.a (mon.0) 413 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.060597+0000 mon.a (mon.0) 414 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.060597+0000 mon.a (mon.0) 414 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.060885+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.060885+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.061082+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.061082+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": 
"mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.061266+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.061266+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062620+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062620+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062736+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062736+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062809+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062809+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062957+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.062957+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.079629+0000 mon.b (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.079629+0000 mon.b (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081044+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081044+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081271+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081271+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081313+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081313+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081354+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.081354+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.097490+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: audit 2026-03-09T17:23:17.097490+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: cluster 2026-03-09T17:23:17.333625+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:17 vm00 bash[20770]: cluster 2026-03-09T17:23:17.333625+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:16.933431+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:16.933431+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': 
finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: cluster 2026-03-09T17:23:16.936745+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: cluster 2026-03-09T17:23:16.936745+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.040695+0000 mon.a (mon.0) 413 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.040695+0000 mon.a (mon.0) 413 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.060597+0000 mon.a (mon.0) 414 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.060597+0000 mon.a (mon.0) 414 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.060885+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.060885+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.061082+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.061082+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.061266+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.061266+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062620+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062620+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 
2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062736+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062736+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062809+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062809+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062957+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.062957+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.079629+0000 mon.b (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.079629+0000 mon.b (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081044+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081044+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081271+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081271+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081313+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081313+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081354+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.081354+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.097490+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: audit 2026-03-09T17:23:17.097490+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: cluster 2026-03-09T17:23:17.333625+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:18.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:17 vm00 bash[28333]: cluster 2026-03-09T17:23:17.333625+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:16.933431+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:16.933431+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: cluster 2026-03-09T17:23:16.936745+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: cluster 2026-03-09T17:23:16.936745+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.040695+0000 mon.a (mon.0) 413 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.040695+0000 mon.a (mon.0) 413 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.060597+0000 mon.a (mon.0) 414 : audit [INF] from='admin 
socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.060597+0000 mon.a (mon.0) 414 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.060885+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.060885+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.061082+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.061082+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.061266+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.061266+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062620+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062620+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062736+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062736+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062809+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062809+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062957+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.062957+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.079629+0000 mon.b (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.079629+0000 mon.b (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081044+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081044+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081271+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081271+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081313+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081313+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:23:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081354+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.081354+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:23:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.097490+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:23:17 vm02 bash[23351]: audit 2026-03-09T17:23:17.097490+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T17:23:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: cluster 2026-03-09T17:23:17.333625+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:17 vm02 bash[23351]: cluster 2026-03-09T17:23:17.333625+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:18 vm00 bash[28333]: cluster 2026-03-09T17:23:17.963320+0000 mon.a (mon.0) 424 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:18 vm00 bash[28333]: cluster 2026-03-09T17:23:17.963320+0000 mon.a (mon.0) 424 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:18 vm00 bash[28333]: cluster 2026-03-09T17:23:17.963354+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:18 vm00 bash[28333]: cluster 2026-03-09T17:23:17.963354+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:18 vm00 bash[20770]: cluster 2026-03-09T17:23:17.963320+0000 mon.a (mon.0) 424 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:18 vm00 bash[20770]: cluster 2026-03-09T17:23:17.963320+0000 mon.a (mon.0) 424 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:18 vm00 bash[20770]: cluster 2026-03-09T17:23:17.963354+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T17:23:19.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:18 vm00 bash[20770]: cluster 2026-03-09T17:23:17.963354+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T17:23:19.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:18 vm02 bash[23351]: cluster 2026-03-09T17:23:17.963320+0000 mon.a (mon.0) 424 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-09T17:23:19.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:18 vm02 bash[23351]: cluster 2026-03-09T17:23:17.963320+0000 mon.a (mon.0) 424 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-09T17:23:19.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:18 vm02 bash[23351]: cluster 2026-03-09T17:23:17.963354+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T17:23:19.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:18 vm02 bash[23351]: cluster 2026-03-09T17:23:17.963354+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T17:23:19.395 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:23:20.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:19 vm00 bash[28333]: cluster 2026-03-09T17:23:19.333916+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 80 MiB 
used, 60 GiB / 60 GiB avail 2026-03-09T17:23:20.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:19 vm00 bash[28333]: cluster 2026-03-09T17:23:19.333916+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:20.225 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:19 vm00 bash[20770]: cluster 2026-03-09T17:23:19.333916+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:20.225 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:19 vm00 bash[20770]: cluster 2026-03-09T17:23:19.333916+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:20.258 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:23:20.274 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm00:/dev/vdb 2026-03-09T17:23:20.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:19 vm02 bash[23351]: cluster 2026-03-09T17:23:19.333916+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:20.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:19 vm02 bash[23351]: cluster 2026-03-09T17:23:19.333916+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: cephadm 2026-03-09T17:23:20.993796+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: cephadm 2026-03-09T17:23:20.993796+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:20.999451+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:20.999451+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.005937+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.005937+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.008046+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.008046+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.008633+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.008633+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.008996+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.008996+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.013092+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: audit 2026-03-09T17:23:21.013092+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: cluster 2026-03-09T17:23:21.334233+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:22 vm00 bash[28333]: cluster 2026-03-09T17:23:21.334233+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: cephadm 2026-03-09T17:23:20.993796+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: cephadm 2026-03-09T17:23:20.993796+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:20.999451+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:20.999451+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.005937+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.005937+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.008046+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.008046+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.008633+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.008633+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.008996+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.008996+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.013092+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: audit 2026-03-09T17:23:21.013092+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: cluster 2026-03-09T17:23:21.334233+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:22 vm00 bash[20770]: cluster 2026-03-09T17:23:21.334233+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:22.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: cephadm 2026-03-09T17:23:20.993796+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: cephadm 2026-03-09T17:23:20.993796+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:20.999451+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:20.999451+0000 mon.a 
(mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.005937+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.005937+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.008046+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.008046+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.008633+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.008633+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.008996+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.008996+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.013092+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: audit 2026-03-09T17:23:21.013092+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: cluster 2026-03-09T17:23:21.334233+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:22 vm02 bash[23351]: cluster 2026-03-09T17:23:21.334233+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:24.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:24 vm00 bash[20770]: cluster 2026-03-09T17:23:23.334532+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T17:23:24.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:24 vm00 bash[20770]: cluster 2026-03-09T17:23:23.334532+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:24.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:24 vm00 bash[28333]: cluster 2026-03-09T17:23:23.334532+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:24.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:24 vm00 bash[28333]: cluster 2026-03-09T17:23:23.334532+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:24.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:24 vm02 bash[23351]: cluster 2026-03-09T17:23:23.334532+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:24.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:24 vm02 bash[23351]: cluster 2026-03-09T17:23:23.334532+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:24.923 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 -- 192.168.123.100:0/2058433689 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f540770a0 msgr2=0x7f1f54075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 --2- 192.168.123.100:0/2058433689 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f540770a0 0x7f1f54075500 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f1f48009a30 tx=0x7f1f4802f220 comp rx=0 tx=0).stop 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 -- 192.168.123.100:0/2058433689 shutdown_connections 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 --2- 192.168.123.100:0/2058433689 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f5410a9d0 0x7f1f5410ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 --2- 192.168.123.100:0/2058433689 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f54075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 --2- 192.168.123.100:0/2058433689 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f540770a0 0x7f1f54075500 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 -- 192.168.123.100:0/2058433689 >> 192.168.123.100:0/2058433689 conn(0x7f1f540fe290 msgr2=0x7f1f541006b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:23:25.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 
1 -- 192.168.123.100:0/2058433689 shutdown_connections 2026-03-09T17:23:25.103 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 -- 192.168.123.100:0/2058433689 wait complete. 2026-03-09T17:23:25.103 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.099+0000 7f1f591b8640 1 Processor -- start 2026-03-09T17:23:25.103 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- start start 2026-03-09T17:23:25.103 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f5419c530 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:23:25.103 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f540770a0 0x7f1f5419ca70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f5410a9d0 0x7f1f541a3af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f1f5410fe80 con 0x7f1f54075a40 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f1f5410fd00 con 0x7f1f5410a9d0 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f1f54110000 con 0x7f1f540770a0 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f5419c530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f5419c530 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52696/0 (socket says 192.168.123.100:52696) 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52d76640 1 -- 192.168.123.100:0/1083874874 learned_addr learned my addr 192.168.123.100:0/1083874874 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f53577640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f5410a9d0 0x7f1f541a3af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:23:25.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 --2- 192.168.123.100:0/1083874874 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f540770a0 0x7f1f5419ca70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:23:25.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 -- 192.168.123.100:0/1083874874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f5410a9d0 msgr2=0x7f1f541a3af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:25.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f5410a9d0 0x7f1f541a3af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:25.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 -- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 msgr2=0x7f1f5419c530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:25.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f5419c530 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:25.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 -- 192.168.123.100:0/1083874874 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1f541a4160 con 0x7f1f540770a0 2026-03-09T17:23:25.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52575640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f540770a0 0x7f1f5419ca70 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f1f4000cce0 tx=0x7f1f40007590 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:23:25.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1f40013070 con 0x7f1f540770a0 2026-03-09T17:23:25.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f1f400044e0 con 0x7f1f540770a0 2026-03-09T17:23:25.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f52d76640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f5419c530 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:23:25.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1f541a4450 con 0x7f1f540770a0 2026-03-09T17:23:25.107 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f1f541a4960 con 0x7f1f540770a0 2026-03-09T17:23:25.107 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1f18005180 con 0x7f1f540770a0 2026-03-09T17:23:25.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.103+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1f4000f450 con 0x7f1f540770a0 2026-03-09T17:23:25.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.111+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f1f400040a0 con 0x7f1f540770a0 2026-03-09T17:23:25.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.111+0000 7f1f2ffff640 1 --2- 192.168.123.100:0/1083874874 >> v2:192.168.123.100:6800/3114914985 conn(0x7f1f28077610 0x7f1f28079ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:23:25.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.111+0000 7f1f52d76640 1 --2- 192.168.123.100:0/1083874874 >> v2:192.168.123.100:6800/3114914985 conn(0x7f1f28077610 0x7f1f28079ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:23:25.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.111+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(22..22 src has 1..22) ==== 2501+0+0 (secure 0 0 0) 0x7f1f40098f90 con 0x7f1f540770a0 2026-03-09T17:23:25.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.111+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1f400993b0 con 0x7f1f540770a0 2026-03-09T17:23:25.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.111+0000 7f1f52d76640 1 --2- 192.168.123.100:0/1083874874 >> v2:192.168.123.100:6800/3114914985 conn(0x7f1f28077610 0x7f1f28079ad0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f1f48002ba0 tx=0x7f1f480057d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:23:25.212 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:25.211+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f1f18002bf0 con 0x7f1f28077610 2026-03-09T17:23:25.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:25 vm00 bash[20770]: audit 2026-03-09T17:23:25.214904+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:23:25.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:25 vm00 bash[20770]: audit 2026-03-09T17:23:25.214904+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:23:25.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:25 vm00 bash[20770]: audit 2026-03-09T17:23:25.216639+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:23:25.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:25 vm00 bash[20770]: audit 2026-03-09T17:23:25.216639+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:23:25.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:25 vm00 bash[20770]: audit 2026-03-09T17:23:25.217148+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:25.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:25 vm00 bash[20770]: audit 2026-03-09T17:23:25.217148+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:25 vm00 bash[28333]: audit 2026-03-09T17:23:25.214904+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:23:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:25 vm00 bash[28333]: audit 2026-03-09T17:23:25.214904+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:23:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:25 vm00 bash[28333]: audit 2026-03-09T17:23:25.216639+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:23:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:25 vm00 bash[28333]: audit 2026-03-09T17:23:25.216639+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:23:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:25 vm00 bash[28333]: audit 2026-03-09T17:23:25.217148+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:25 vm00 bash[28333]: audit 2026-03-09T17:23:25.217148+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:25.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:25 vm02 bash[23351]: audit 2026-03-09T17:23:25.214904+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' 
entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:23:25.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:25 vm02 bash[23351]: audit 2026-03-09T17:23:25.214904+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:23:25.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:25 vm02 bash[23351]: audit 2026-03-09T17:23:25.216639+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:23:25.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:25 vm02 bash[23351]: audit 2026-03-09T17:23:25.216639+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:23:25.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:25 vm02 bash[23351]: audit 2026-03-09T17:23:25.217148+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:25.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:25 vm02 bash[23351]: audit 2026-03-09T17:23:25.217148+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:26 vm00 bash[20770]: audit 2026-03-09T17:23:25.213518+0000 mgr.y (mgr.14150) 123 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:26 vm00 bash[20770]: audit 2026-03-09T17:23:25.213518+0000 mgr.y (mgr.14150) 123 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:26 vm00 bash[20770]: cluster 2026-03-09T17:23:25.334921+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:26 vm00 bash[20770]: cluster 2026-03-09T17:23:25.334921+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:26 vm00 bash[28333]: audit 2026-03-09T17:23:25.213518+0000 mgr.y (mgr.14150) 123 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:26 vm00 bash[28333]: audit 2026-03-09T17:23:25.213518+0000 mgr.y (mgr.14150) 123 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:26 vm00 bash[28333]: cluster 2026-03-09T17:23:25.334921+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 
1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:26.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:26 vm00 bash[28333]: cluster 2026-03-09T17:23:25.334921+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:26.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:26 vm02 bash[23351]: audit 2026-03-09T17:23:25.213518+0000 mgr.y (mgr.14150) 123 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:23:26.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:26 vm02 bash[23351]: audit 2026-03-09T17:23:25.213518+0000 mgr.y (mgr.14150) 123 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:23:26.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:26 vm02 bash[23351]: cluster 2026-03-09T17:23:25.334921+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:26.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:26 vm02 bash[23351]: cluster 2026-03-09T17:23:25.334921+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:28.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:28 vm00 bash[20770]: cluster 2026-03-09T17:23:27.335142+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:28.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:28 vm00 bash[20770]: cluster 2026-03-09T17:23:27.335142+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:28.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:28 vm00 bash[28333]: cluster 2026-03-09T17:23:27.335142+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:28.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:28 vm00 bash[28333]: cluster 2026-03-09T17:23:27.335142+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:28.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:28 vm02 bash[23351]: cluster 2026-03-09T17:23:27.335142+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:28.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:28 vm02 bash[23351]: cluster 2026-03-09T17:23:27.335142+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:30.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:30 vm00 bash[28333]: cluster 2026-03-09T17:23:29.335504+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:30.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:30 vm00 bash[28333]: cluster 2026-03-09T17:23:29.335504+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T17:23:30.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:30 vm00 bash[20770]: cluster 2026-03-09T17:23:29.335504+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:30.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:30 vm00 bash[20770]: cluster 2026-03-09T17:23:29.335504+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:30.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:30 vm02 bash[23351]: cluster 2026-03-09T17:23:29.335504+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:30.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:30 vm02 bash[23351]: cluster 2026-03-09T17:23:29.335504+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.693800+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1533615458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.693800+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1533615458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.694191+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.694191+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.697101+0000 mon.a (mon.0) 436 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]': finished 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.697101+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]': finished 2026-03-09T17:23:31.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: cluster 2026-03-09T17:23:30.700830+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: cluster 2026-03-09T17:23:30.700830+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.700950+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:30.700950+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:31.381755+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.100:0/1277239256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:31 vm00 bash[28333]: audit 2026-03-09T17:23:31.381755+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.100:0/1277239256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.693800+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1533615458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.693800+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1533615458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.694191+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.694191+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.697101+0000 mon.a (mon.0) 436 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]': finished 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.697101+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]': finished 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: cluster 2026-03-09T17:23:30.700830+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: cluster 2026-03-09T17:23:30.700830+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.700950+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:30.700950+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:31.381755+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.100:0/1277239256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:31 vm00 bash[20770]: audit 2026-03-09T17:23:31.381755+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.100:0/1277239256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.693800+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1533615458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.693800+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1533615458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.694191+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.694191+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]: dispatch 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.697101+0000 mon.a (mon.0) 436 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]': finished 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.697101+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "814401b5-fa87-447f-8581-6e4a7fde7f2e"}]': finished 2026-03-09T17:23:31.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: cluster 2026-03-09T17:23:30.700830+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T17:23:31.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: cluster 2026-03-09T17:23:30.700830+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T17:23:31.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.700950+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:31.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:30.700950+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:31.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:31.381755+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.100:0/1277239256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:31.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:31 vm02 bash[23351]: audit 2026-03-09T17:23:31.381755+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.100:0/1277239256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:23:33.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:32 vm00 bash[28333]: cluster 2026-03-09T17:23:31.335814+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:33.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:32 vm00 bash[28333]: cluster 2026-03-09T17:23:31.335814+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:33.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:32 vm00 bash[20770]: cluster 2026-03-09T17:23:31.335814+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:33.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:32 vm00 bash[20770]: cluster 2026-03-09T17:23:31.335814+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:33.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:32 vm02 bash[23351]: cluster 2026-03-09T17:23:31.335814+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:32 vm02 bash[23351]: cluster 2026-03-09T17:23:31.335814+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:35.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:34 vm00 bash[28333]: cluster 2026-03-09T17:23:33.336145+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:35.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:34 vm00 bash[28333]: 
cluster 2026-03-09T17:23:33.336145+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:35.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:34 vm00 bash[20770]: cluster 2026-03-09T17:23:33.336145+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:35.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:34 vm00 bash[20770]: cluster 2026-03-09T17:23:33.336145+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:35.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:34 vm02 bash[23351]: cluster 2026-03-09T17:23:33.336145+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:35.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:34 vm02 bash[23351]: cluster 2026-03-09T17:23:33.336145+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:37.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:36 vm00 bash[28333]: cluster 2026-03-09T17:23:35.336414+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:37.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:36 vm00 bash[28333]: cluster 2026-03-09T17:23:35.336414+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:36 vm00 bash[20770]: cluster 2026-03-09T17:23:35.336414+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:36 vm00 bash[20770]: cluster 2026-03-09T17:23:35.336414+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:37.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:36 vm02 bash[23351]: cluster 2026-03-09T17:23:35.336414+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:36 vm02 bash[23351]: cluster 2026-03-09T17:23:35.336414+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:39.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:38 vm02 bash[23351]: cluster 2026-03-09T17:23:37.336666+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:39.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:38 vm02 bash[23351]: cluster 2026-03-09T17:23:37.336666+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:38 vm00 bash[28333]: cluster 2026-03-09T17:23:37.336666+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:39.287 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:38 vm00 bash[28333]: cluster 2026-03-09T17:23:37.336666+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:38 vm00 bash[20770]: cluster 2026-03-09T17:23:37.336666+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:39.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:38 vm00 bash[20770]: cluster 2026-03-09T17:23:37.336666+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:41.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:40 vm02 bash[23351]: cluster 2026-03-09T17:23:39.336963+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:40 vm02 bash[23351]: cluster 2026-03-09T17:23:39.336963+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:41.250 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:40 vm00 bash[28333]: cluster 2026-03-09T17:23:39.336963+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:41.250 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:40 vm00 bash[28333]: cluster 2026-03-09T17:23:39.336963+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:41.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:40 vm00 bash[20770]: cluster 2026-03-09T17:23:39.336963+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:40 vm00 bash[20770]: cluster 2026-03-09T17:23:39.336963+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:42.281 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:41 vm00 bash[28333]: audit 2026-03-09T17:23:41.311299+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:41 vm00 bash[28333]: audit 2026-03-09T17:23:41.311299+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:41 vm00 bash[28333]: audit 2026-03-09T17:23:41.312041+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:41 vm00 bash[28333]: audit 2026-03-09T17:23:41.312041+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:41 vm00 bash[20770]: audit 
2026-03-09T17:23:41.311299+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:41 vm00 bash[20770]: audit 2026-03-09T17:23:41.311299+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:41 vm00 bash[20770]: audit 2026-03-09T17:23:41.312041+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:42.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:41 vm00 bash[20770]: audit 2026-03-09T17:23:41.312041+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:42.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:41 vm02 bash[23351]: audit 2026-03-09T17:23:41.311299+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T17:23:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:41 vm02 bash[23351]: audit 2026-03-09T17:23:41.311299+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T17:23:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:41 vm02 bash[23351]: audit 2026-03-09T17:23:41.312041+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:41 vm02 bash[23351]: audit 2026-03-09T17:23:41.312041+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:42.901 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:23:42.902 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:42 vm00 bash[20770]: cephadm 2026-03-09T17:23:41.312543+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:42 vm00 bash[20770]: cephadm 2026-03-09T17:23:41.312543+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:42 vm00 bash[20770]: cluster 2026-03-09T17:23:41.337238+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:42 vm00 bash[20770]: cluster 2026-03-09T17:23:41.337238+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:42.902 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:42.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:23:42 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:23:43.261 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:42 vm00 bash[28333]: cephadm 2026-03-09T17:23:41.312543+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T17:23:43.261 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:42 vm00 bash[28333]: cephadm 2026-03-09T17:23:41.312543+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T17:23:43.261 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:42 vm00 bash[28333]: cluster 2026-03-09T17:23:41.337238+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:43.261 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:42 vm00 bash[28333]: cluster 2026-03-09T17:23:41.337238+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:43.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:42 vm02 bash[23351]: cephadm 2026-03-09T17:23:41.312543+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T17:23:43.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:42 vm02 bash[23351]: cephadm 2026-03-09T17:23:41.312543+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T17:23:43.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:42 vm02 bash[23351]: cluster 2026-03-09T17:23:41.337238+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:43.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:42 vm02 bash[23351]: cluster 2026-03-09T17:23:41.337238+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:43 vm00 bash[28333]: audit 2026-03-09T17:23:42.975825+0000 mon.a (mon.0) 441 : audit 
[DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:43 vm00 bash[28333]: audit 2026-03-09T17:23:42.975825+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:43 vm00 bash[28333]: audit 2026-03-09T17:23:42.980058+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:43 vm00 bash[28333]: audit 2026-03-09T17:23:42.980058+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:43 vm00 bash[28333]: audit 2026-03-09T17:23:42.987155+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:43 vm00 bash[28333]: audit 2026-03-09T17:23:42.987155+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:43 vm00 bash[20770]: audit 2026-03-09T17:23:42.975825+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:43 vm00 bash[20770]: audit 2026-03-09T17:23:42.975825+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:43 vm00 bash[20770]: audit 2026-03-09T17:23:42.980058+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:43 vm00 bash[20770]: audit 2026-03-09T17:23:42.980058+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:43 vm00 bash[20770]: audit 2026-03-09T17:23:42.987155+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:43 vm00 bash[20770]: audit 2026-03-09T17:23:42.987155+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:43 vm02 bash[23351]: audit 2026-03-09T17:23:42.975825+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:44.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:43 vm02 bash[23351]: audit 2026-03-09T17:23:42.975825+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:44.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:43 vm02 bash[23351]: audit 2026-03-09T17:23:42.980058+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:43 vm02 bash[23351]: audit 2026-03-09T17:23:42.980058+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:43 vm02 bash[23351]: audit 2026-03-09T17:23:42.987155+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:44.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:43 vm02 bash[23351]: audit 2026-03-09T17:23:42.987155+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:45.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:45 vm02 bash[23351]: cluster 2026-03-09T17:23:43.337531+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:45.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:45 vm02 bash[23351]: cluster 2026-03-09T17:23:43.337531+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:45.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:45 vm00 bash[20770]: cluster 2026-03-09T17:23:43.337531+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:45.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:45 vm00 bash[20770]: cluster 2026-03-09T17:23:43.337531+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:45.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:45 vm00 bash[28333]: cluster 2026-03-09T17:23:43.337531+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:45.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:45 vm00 bash[28333]: cluster 2026-03-09T17:23:43.337531+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:46.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:46 vm02 bash[23351]: cluster 2026-03-09T17:23:45.337834+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:46.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:46 vm02 bash[23351]: cluster 2026-03-09T17:23:45.337834+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:46.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:46 vm00 bash[28333]: cluster 2026-03-09T17:23:45.337834+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:46.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:46 vm00 bash[28333]: cluster 2026-03-09T17:23:45.337834+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:46.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:46 vm00 bash[20770]: cluster 2026-03-09T17:23:45.337834+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB 
/ 60 GiB avail 2026-03-09T17:23:46.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:46 vm00 bash[20770]: cluster 2026-03-09T17:23:45.337834+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:47.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:47 vm02 bash[23351]: audit 2026-03-09T17:23:46.222879+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:47 vm02 bash[23351]: audit 2026-03-09T17:23:46.222879+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:47 vm02 bash[23351]: audit 2026-03-09T17:23:46.223253+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:47 vm02 bash[23351]: audit 2026-03-09T17:23:46.223253+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:47 vm00 bash[28333]: audit 2026-03-09T17:23:46.222879+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:47 vm00 bash[28333]: audit 2026-03-09T17:23:46.222879+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:47 vm00 bash[28333]: audit 2026-03-09T17:23:46.223253+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:47 vm00 bash[28333]: audit 2026-03-09T17:23:46.223253+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:47 vm00 bash[20770]: audit 2026-03-09T17:23:46.222879+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:47 vm00 bash[20770]: audit 2026-03-09T17:23:46.222879+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:47 vm00 bash[20770]: audit 2026-03-09T17:23:46.223253+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", 
"ids": ["3"]}]: dispatch 2026-03-09T17:23:47.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:47 vm00 bash[20770]: audit 2026-03-09T17:23:46.223253+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T17:23:48.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.062070+0000 mon.a (mon.0) 445 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T17:23:48.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.062070+0000 mon.a (mon.0) 445 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: cluster 2026-03-09T17:23:47.066240+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: cluster 2026-03-09T17:23:47.066240+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.066551+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.066551+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.066997+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.066997+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.067272+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: audit 2026-03-09T17:23:47.067272+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: cluster 2026-03-09T17:23:47.338106+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:48 vm02 bash[23351]: cluster 
2026-03-09T17:23:47.338106+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.062070+0000 mon.a (mon.0) 445 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.062070+0000 mon.a (mon.0) 445 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: cluster 2026-03-09T17:23:47.066240+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: cluster 2026-03-09T17:23:47.066240+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.066551+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.066551+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.066997+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.066997+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.067272+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: audit 2026-03-09T17:23:47.067272+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: cluster 2026-03-09T17:23:47.338106+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:48 vm00 bash[28333]: cluster 2026-03-09T17:23:47.338106+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.062070+0000 mon.a (mon.0) 445 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T17:23:48.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.062070+0000 mon.a (mon.0) 445 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: cluster 2026-03-09T17:23:47.066240+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: cluster 2026-03-09T17:23:47.066240+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.066551+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.066551+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.066997+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.066997+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.100:6813/652999983' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.067272+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: audit 2026-03-09T17:23:47.067272+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: cluster 2026-03-09T17:23:47.338106+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:48 vm00 bash[20770]: cluster 2026-03-09T17:23:47.338106+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:49.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:48.064570+0000 mon.a (mon.0) 449 : audit [INF] 
from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:49.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:48.064570+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:49.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: cluster 2026-03-09T17:23:48.070098+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T17:23:49.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: cluster 2026-03-09T17:23:48.070098+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:48.070711+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:48.070711+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:48.088595+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:48.088595+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:49.071577+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:49 vm00 bash[28333]: audit 2026-03-09T17:23:49.071577+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:48.064570+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:48.064570+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: cluster 2026-03-09T17:23:48.070098+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: cluster 2026-03-09T17:23:48.070098+0000 mon.a (mon.0) 450 : cluster [DBG] 
osdmap e25: 4 total, 3 up, 4 in 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:48.070711+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:48.070711+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:48.088595+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:48.088595+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:49.071577+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:49 vm00 bash[20770]: audit 2026-03-09T17:23:49.071577+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:48.064570+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:49.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:48.064570+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T17:23:49.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: cluster 2026-03-09T17:23:48.070098+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T17:23:49.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: cluster 2026-03-09T17:23:48.070098+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T17:23:49.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:48.070711+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:48.070711+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:48.088595+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:48.088595+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:49.071577+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:49.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:49 vm02 bash[23351]: audit 2026-03-09T17:23:49.071577+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.164 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 3 on host 'vm00' 2026-03-09T17:23:50.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.159+0000 7f1f2ffff640 1 -- 192.168.123.100:0/1083874874 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f1f18002bf0 con 0x7f1f28077610 2026-03-09T17:23:50.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.163+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 >> v2:192.168.123.100:6800/3114914985 conn(0x7f1f28077610 msgr2=0x7f1f28079ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:50.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.163+0000 7f1f591b8640 1 --2- 192.168.123.100:0/1083874874 >> v2:192.168.123.100:6800/3114914985 conn(0x7f1f28077610 0x7f1f28079ad0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f1f48002ba0 tx=0x7f1f480057d0 comp rx=0 tx=0).stop 2026-03-09T17:23:50.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.163+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f540770a0 msgr2=0x7f1f5419ca70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:23:50.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.163+0000 7f1f591b8640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f540770a0 0x7f1f5419ca70 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f1f4000cce0 tx=0x7f1f40007590 comp rx=0 tx=0).stop 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 shutdown_connections 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 --2- 192.168.123.100:0/1083874874 >> v2:192.168.123.100:6800/3114914985 conn(0x7f1f28077610 0x7f1f28079ad0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1f5410a9d0 0x7f1f541a3af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1f540770a0 0x7f1f5419ca70 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 --2- 192.168.123.100:0/1083874874 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1f54075a40 0x7f1f5419c530 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 >> 192.168.123.100:0/1083874874 conn(0x7f1f540fe290 msgr2=0x7f1f54100680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 shutdown_connections 2026-03-09T17:23:50.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:23:50.167+0000 7f1f591b8640 1 -- 192.168.123.100:0/1083874874 wait complete. 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:47.183384+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:47.183384+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:47.183437+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:47.183437+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:49.082732+0000 mon.a (mon.0) 454 : cluster [INF] osd.3 v2:192.168.123.100:6813/652999983 boot 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:49.082732+0000 mon.a (mon.0) 454 : cluster [INF] osd.3 v2:192.168.123.100:6813/652999983 boot 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:49.082758+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:49.082758+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.083407+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.083407+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.215061+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.215061+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
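The interleaved journal records above cover the complete bring-up of osd.3 on vm00: the daemon sets its device class to hdd, "osd crush create-or-move" places it under host=vm00 / root=default (the weight 0.0195 is, by Ceph's usual convention, the device capacity expressed in TiB), and the "osd.3 ... boot" record with the osdmap bump to e26 (4 total, 4 up, 4 in) confirms it has joined. A minimal spot-check of the resulting placement from the bootstrap host, assuming the admin keyring this run uses, would be:

    sudo cephadm shell -- ceph osd tree       # CRUSH hierarchy with device classes and weights
    sudo cephadm shell -- ceph osd df tree    # same tree plus per-OSD utilisation
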
2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.219519+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.219519+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.221364+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.221364+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.221892+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.221892+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.226095+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: audit 2026-03-09T17:23:49.226095+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:49.338349+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:50 vm00 bash[20770]: cluster 2026-03-09T17:23:49.338349+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:47.183384+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:47.183384+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:47.183437+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:47.183437+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:49.082732+0000 mon.a (mon.0) 454 : cluster [INF] osd.3 
v2:192.168.123.100:6813/652999983 boot 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:49.082732+0000 mon.a (mon.0) 454 : cluster [INF] osd.3 v2:192.168.123.100:6813/652999983 boot 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:49.082758+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:49.082758+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.083407+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.083407+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.215061+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.215061+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.219519+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.219519+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.221364+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.221364+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.221892+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.221892+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.226095+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: audit 2026-03-09T17:23:49.226095+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:49.338349+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:50.184 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:50 vm00 bash[28333]: cluster 2026-03-09T17:23:49.338349+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:50.276 DEBUG:teuthology.orchestra.run.vm00:osd.3> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.3.service 2026-03-09T17:23:50.277 INFO:tasks.cephadm:Deploying osd.4 on vm02 with /dev/vde... 2026-03-09T17:23:50.277 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vde 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:47.183384+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:47.183384+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:47.183437+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:47.183437+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:49.082732+0000 mon.a (mon.0) 454 : cluster [INF] osd.3 v2:192.168.123.100:6813/652999983 boot 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:49.082732+0000 mon.a (mon.0) 454 : cluster [INF] osd.3 v2:192.168.123.100:6813/652999983 boot 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:49.082758+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:49.082758+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.083407+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.083407+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 
2026-03-09T17:23:49.215061+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.215061+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.219519+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.219519+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.221364+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.221364+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.221892+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.221892+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.226095+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: audit 2026-03-09T17:23:49.226095+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:49.338349+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:50 vm02 bash[23351]: cluster 2026-03-09T17:23:49.338349+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T17:23:51.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: cluster 2026-03-09T17:23:50.101396+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: cluster 2026-03-09T17:23:50.101396+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: audit 2026-03-09T17:23:50.151130+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: audit 2026-03-09T17:23:50.151130+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: audit 2026-03-09T17:23:50.155577+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: audit 2026-03-09T17:23:50.155577+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: audit 2026-03-09T17:23:50.161261+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:51 vm02 bash[23351]: audit 2026-03-09T17:23:50.161261+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: cluster 2026-03-09T17:23:50.101396+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: cluster 2026-03-09T17:23:50.101396+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: audit 2026-03-09T17:23:50.151130+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: audit 2026-03-09T17:23:50.151130+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: audit 2026-03-09T17:23:50.155577+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: audit 2026-03-09T17:23:50.155577+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: audit 2026-03-09T17:23:50.161261+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:51 vm00 bash[28333]: audit 2026-03-09T17:23:50.161261+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: cluster 2026-03-09T17:23:50.101396+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: cluster 2026-03-09T17:23:50.101396+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 
2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: audit 2026-03-09T17:23:50.151130+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: audit 2026-03-09T17:23:50.151130+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: audit 2026-03-09T17:23:50.155577+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: audit 2026-03-09T17:23:50.155577+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: audit 2026-03-09T17:23:50.161261+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:51 vm00 bash[20770]: audit 2026-03-09T17:23:50.161261+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:52.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:52 vm02 bash[23351]: cluster 2026-03-09T17:23:51.338616+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:52.384 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:52 vm02 bash[23351]: cluster 2026-03-09T17:23:51.338616+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:52.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:52 vm00 bash[28333]: cluster 2026-03-09T17:23:51.338616+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:52.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:52 vm00 bash[28333]: cluster 2026-03-09T17:23:51.338616+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:52.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:52 vm00 bash[20770]: cluster 2026-03-09T17:23:51.338616+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:52.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:52 vm00 bash[20770]: cluster 2026-03-09T17:23:51.338616+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:54 vm00 bash[28333]: cluster 2026-03-09T17:23:53.338926+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:54 vm00 bash[28333]: cluster 2026-03-09T17:23:53.338926+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB 
avail 2026-03-09T17:23:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:54 vm00 bash[20770]: cluster 2026-03-09T17:23:53.338926+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:54 vm00 bash[20770]: cluster 2026-03-09T17:23:53.338926+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:54.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:54 vm02 bash[23351]: cluster 2026-03-09T17:23:53.338926+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:54.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:54 vm02 bash[23351]: cluster 2026-03-09T17:23:53.338926+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:54.889 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:23:55.716 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:23:55.732 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm02:/dev/vde 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: cluster 2026-03-09T17:23:55.339192+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: cluster 2026-03-09T17:23:55.339192+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: cephadm 2026-03-09T17:23:55.810572+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: cephadm 2026-03-09T17:23:55.810572+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.816798+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.816798+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.824711+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.824711+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 
2026-03-09T17:23:55.827820+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.827820+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.828706+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.828706+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.829258+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.829258+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.837373+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:56 vm02 bash[23351]: audit 2026-03-09T17:23:55.837373+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: cluster 2026-03-09T17:23:55.339192+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: cluster 2026-03-09T17:23:55.339192+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: cephadm 2026-03-09T17:23:55.810572+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: cephadm 2026-03-09T17:23:55.810572+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.816798+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.816798+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
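The osd.4 deployment on vm02:/dev/vde kicked off above follows the same two-step pattern cephadm uses for every device in this run: zap the device with ceph-volume, then hand it to the orchestrator. Condensed from the commands already logged (image, fsid, and keyring paths are the ones this job uses):

    # step 1: wipe the target device
    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vde
    # step 2: create the OSD through the orchestrator
    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm02:/dev/vde

The expected confirmation, mirroring what osd.3 produced earlier, is a "Created osd(s) 4 on host 'vm02'" message followed by the new OSD's boot record in the mon journals.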
2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.824711+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.824711+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.827820+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.827820+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.828706+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.828706+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.829258+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.829258+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.837373+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:56 vm00 bash[28333]: audit 2026-03-09T17:23:55.837373+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: cluster 2026-03-09T17:23:55.339192+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: cluster 2026-03-09T17:23:55.339192+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: cephadm 2026-03-09T17:23:55.810572+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: cephadm 
2026-03-09T17:23:55.810572+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.816798+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.816798+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.824711+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.824711+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.827820+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.827820+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.828706+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.828706+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.829258+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.829258+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.837373+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:56 vm00 bash[20770]: audit 2026-03-09T17:23:55.837373+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:23:59.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:58 vm02 bash[23351]: cluster 2026-03-09T17:23:57.339525+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:59.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:23:58 vm02 bash[23351]: cluster 2026-03-09T17:23:57.339525+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:58 vm00 bash[28333]: cluster 2026-03-09T17:23:57.339525+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:23:58 vm00 bash[28333]: cluster 2026-03-09T17:23:57.339525+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:58 vm00 bash[20770]: cluster 2026-03-09T17:23:57.339525+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:23:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:23:58 vm00 bash[20770]: cluster 2026-03-09T17:23:57.339525+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:00.366 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- 192.168.123.102:0/1425506609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c01029d0 msgr2=0x7f96c0102dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- 192.168.123.102:0/1425506609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c01029d0 0x7f96c0102dd0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f96b4009a80 tx=0x7f96b402f280 comp rx=0 tx=0).stop 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- 192.168.123.102:0/1425506609 shutdown_connections 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- 192.168.123.102:0/1425506609 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f96c0104590 0x7f96c010ae20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- 192.168.123.102:0/1425506609 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c0103bd0 0x7f96c0104050 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- 192.168.123.102:0/1425506609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c01029d0 0x7f96c0102dd0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- 192.168.123.102:0/1425506609 >> 192.168.123.102:0/1425506609 conn(0x7f96c00fe180 msgr2=0x7f96c01005a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- 
192.168.123.102:0/1425506609 shutdown_connections 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- 192.168.123.102:0/1425506609 wait complete. 2026-03-09T17:24:00.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 Processor -- start 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- start start 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 0x7f96c019c3d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f96c0103bd0 0x7f96c019c910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c0104590 0x7f96c01a3990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f96c010dac0 con 0x7f96c01029d0 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f96c010d940 con 0x7f96c0104590 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c63b3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 0x7f96c019c3d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c63b3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 0x7f96c019c3d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:41634/0 (socket says 192.168.123.102:41634) 2026-03-09T17:24:00.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c63b3640 1 -- 192.168.123.102:0/2305073752 learned_addr learned my addr 192.168.123.102:0/2305073752 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c5bb2640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f96c0103bd0 0x7f96c019c910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.633+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f96c010dc40 con 0x7f96c0103bd0 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c6bb4640 1 --2- 
192.168.123.102:0/2305073752 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c0104590 0x7f96c01a3990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c63b3640 1 -- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f96c0103bd0 msgr2=0x7f96c019c910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c63b3640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f96c0103bd0 0x7f96c019c910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c63b3640 1 -- 192.168.123.102:0/2305073752 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c0104590 msgr2=0x7f96c01a3990 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c63b3640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c0104590 0x7f96c01a3990 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c63b3640 1 -- 192.168.123.102:0/2305073752 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f96c01a4090 con 0x7f96c01029d0 2026-03-09T17:24:00.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c6bb4640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c0104590 0x7f96c01a3990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:24:00.641 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c63b3640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 0x7f96c019c3d0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f96b4009a50 tx=0x7f96b40387f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:24:00.641 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f96b4038df0 con 0x7f96c01029d0 2026-03-09T17:24:00.641 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f96b403e070 con 0x7f96c01029d0 2026-03-09T17:24:00.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f96c01a4320 con 0x7f96c01029d0 2026-03-09T17:24:00.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f96b4042bc0 con 0x7f96c01029d0 2026-03-09T17:24:00.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f96c01a4750 con 0x7f96c01029d0 2026-03-09T17:24:00.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f96b404c440 con 0x7f96c01029d0 2026-03-09T17:24:00.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9690005180 con 0x7f96c01029d0 2026-03-09T17:24:00.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96af7fe640 1 --2- 192.168.123.102:0/2305073752 >> v2:192.168.123.100:6800/3114914985 conn(0x7f96a0077600 0x7f96a0079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:00.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(27..27 src has 1..27) ==== 2793+0+0 (secure 0 0 0) 0x7f96b40bdce0 con 0x7f96c01029d0 2026-03-09T17:24:00.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c5bb2640 1 --2- 192.168.123.102:0/2305073752 >> v2:192.168.123.100:6800/3114914985 conn(0x7f96a0077600 0x7f96a0079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:00.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.637+0000 7f96c5bb2640 1 --2- 192.168.123.102:0/2305073752 >> v2:192.168.123.100:6800/3114914985 conn(0x7f96a0077600 0x7f96a0079ac0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f96c019d8f0 tx=0x7f96b00073d0 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:24:00.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.641+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f96b408b800 con 0x7f96c01029d0 2026-03-09T17:24:00.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:00.773+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f9690002bf0 con 0x7f96a0077600 2026-03-09T17:24:01.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: cluster 2026-03-09T17:23:59.339758+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: cluster 2026-03-09T17:23:59.339758+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: audit 2026-03-09T17:24:00.781488+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: audit 2026-03-09T17:24:00.781488+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: audit 2026-03-09T17:24:00.782644+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: audit 2026-03-09T17:24:00.782644+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: audit 2026-03-09T17:24:00.783020+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:00 vm02 bash[23351]: audit 2026-03-09T17:24:00.783020+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: cluster 2026-03-09T17:23:59.339758+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: cluster 2026-03-09T17:23:59.339758+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 
2026-03-09T17:24:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: audit 2026-03-09T17:24:00.781488+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: audit 2026-03-09T17:24:00.781488+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: audit 2026-03-09T17:24:00.782644+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: audit 2026-03-09T17:24:00.782644+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: audit 2026-03-09T17:24:00.783020+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:00 vm00 bash[28333]: audit 2026-03-09T17:24:00.783020+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: cluster 2026-03-09T17:23:59.339758+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: cluster 2026-03-09T17:23:59.339758+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: audit 2026-03-09T17:24:00.781488+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: audit 2026-03-09T17:24:00.781488+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: audit 2026-03-09T17:24:00.782644+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: audit 2026-03-09T17:24:00.782644+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:01.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: audit 2026-03-09T17:24:00.783020+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:00 vm00 bash[20770]: audit 2026-03-09T17:24:00.783020+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:02.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:01 vm02 bash[23351]: audit 2026-03-09T17:24:00.780090+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:02.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:01 vm02 bash[23351]: audit 2026-03-09T17:24:00.780090+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:02.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:01 vm00 bash[28333]: audit 2026-03-09T17:24:00.780090+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:02.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:01 vm00 bash[28333]: audit 2026-03-09T17:24:00.780090+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:02.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:01 vm00 bash[20770]: audit 2026-03-09T17:24:00.780090+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:02.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:01 vm00 bash[20770]: audit 2026-03-09T17:24:00.780090+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:03.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:02 vm02 bash[23351]: cluster 2026-03-09T17:24:01.340059+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:03.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:02 vm02 bash[23351]: cluster 2026-03-09T17:24:01.340059+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:03.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:02 vm00 bash[28333]: cluster 2026-03-09T17:24:01.340059+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:03.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:02 vm00 bash[28333]: cluster 2026-03-09T17:24:01.340059+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:03.287 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:02 vm00 bash[20770]: cluster 2026-03-09T17:24:01.340059+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:03.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:02 vm00 bash[20770]: cluster 2026-03-09T17:24:01.340059+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:05.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:04 vm02 bash[23351]: cluster 2026-03-09T17:24:03.340302+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:05.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:04 vm02 bash[23351]: cluster 2026-03-09T17:24:03.340302+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:05.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:04 vm00 bash[28333]: cluster 2026-03-09T17:24:03.340302+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:05.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:04 vm00 bash[28333]: cluster 2026-03-09T17:24:03.340302+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:05.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:04 vm00 bash[20770]: cluster 2026-03-09T17:24:03.340302+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:05.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:04 vm00 bash[20770]: cluster 2026-03-09T17:24:03.340302+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: cluster 2026-03-09T17:24:05.340523+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: cluster 2026-03-09T17:24:05.340523+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.211514+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.102:0/2918544422' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.211514+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.102:0/2918544422' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.211820+0000 mon.a (mon.0) 475 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.211820+0000 mon.a (mon.0) 475 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.214214+0000 mon.a (mon.0) 476 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]': finished 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.214214+0000 mon.a (mon.0) 476 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]': finished 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: cluster 2026-03-09T17:24:06.217940+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: cluster 2026-03-09T17:24:06.217940+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.218020+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.218020+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.819058+0000 mon.b (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/3939689281' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:06 vm02 bash[23351]: audit 2026-03-09T17:24:06.819058+0000 mon.b (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/3939689281' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: cluster 2026-03-09T17:24:05.340523+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: cluster 2026-03-09T17:24:05.340523+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.211514+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 
192.168.123.102:0/2918544422' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.211514+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.102:0/2918544422' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.211820+0000 mon.a (mon.0) 475 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.211820+0000 mon.a (mon.0) 475 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.214214+0000 mon.a (mon.0) 476 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]': finished 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.214214+0000 mon.a (mon.0) 476 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]': finished 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: cluster 2026-03-09T17:24:06.217940+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: cluster 2026-03-09T17:24:06.217940+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.218020+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.218020+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.819058+0000 mon.b (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/3939689281' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:06 vm00 bash[28333]: audit 2026-03-09T17:24:06.819058+0000 mon.b (mon.1) 13 : audit [DBG] from='client.? 
192.168.123.102:0/3939689281' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: cluster 2026-03-09T17:24:05.340523+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: cluster 2026-03-09T17:24:05.340523+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.211514+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.102:0/2918544422' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.211514+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.102:0/2918544422' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.211820+0000 mon.a (mon.0) 475 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.211820+0000 mon.a (mon.0) 475 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.214214+0000 mon.a (mon.0) 476 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]': finished 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.214214+0000 mon.a (mon.0) 476 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dae36d57-3a29-4f68-ae98-8c24557509f1"}]': finished 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: cluster 2026-03-09T17:24:06.217940+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: cluster 2026-03-09T17:24:06.217940+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.218020+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.218020+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.819058+0000 mon.b (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/3939689281' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:06 vm00 bash[20770]: audit 2026-03-09T17:24:06.819058+0000 mon.b (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/3939689281' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:09.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:08 vm02 bash[23351]: cluster 2026-03-09T17:24:07.340736+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:09.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:08 vm02 bash[23351]: cluster 2026-03-09T17:24:07.340736+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:08 vm00 bash[28333]: cluster 2026-03-09T17:24:07.340736+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:08 vm00 bash[28333]: cluster 2026-03-09T17:24:07.340736+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:08 vm00 bash[20770]: cluster 2026-03-09T17:24:07.340736+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:09.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:08 vm00 bash[20770]: cluster 2026-03-09T17:24:07.340736+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:10 vm00 bash[28333]: cluster 2026-03-09T17:24:09.340981+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:11.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:10 vm00 
bash[28333]: cluster 2026-03-09T17:24:09.340981+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:10 vm00 bash[20770]: cluster 2026-03-09T17:24:09.340981+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:10 vm00 bash[20770]: cluster 2026-03-09T17:24:09.340981+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:11.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:10 vm02 bash[23351]: cluster 2026-03-09T17:24:09.340981+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:11.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:10 vm02 bash[23351]: cluster 2026-03-09T17:24:09.340981+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:12 vm02 bash[23351]: cluster 2026-03-09T17:24:11.341253+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:12 vm02 bash[23351]: cluster 2026-03-09T17:24:11.341253+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:12 vm00 bash[28333]: cluster 2026-03-09T17:24:11.341253+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:12 vm00 bash[28333]: cluster 2026-03-09T17:24:11.341253+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:12 vm00 bash[20770]: cluster 2026-03-09T17:24:11.341253+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:12 vm00 bash[20770]: cluster 2026-03-09T17:24:11.341253+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:14.884 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:14 vm02 bash[23351]: cluster 2026-03-09T17:24:13.341493+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:14 vm02 bash[23351]: cluster 2026-03-09T17:24:13.341493+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:14 vm00 bash[28333]: cluster 2026-03-09T17:24:13.341493+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB 
avail 2026-03-09T17:24:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:14 vm00 bash[28333]: cluster 2026-03-09T17:24:13.341493+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:15.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:14 vm00 bash[20770]: cluster 2026-03-09T17:24:13.341493+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:15.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:14 vm00 bash[20770]: cluster 2026-03-09T17:24:13.341493+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:16.856 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 bash[23351]: cluster 2026-03-09T17:24:15.341793+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 bash[23351]: cluster 2026-03-09T17:24:15.341793+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 bash[23351]: audit 2026-03-09T17:24:15.988606+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 bash[23351]: audit 2026-03-09T17:24:15.988606+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 bash[23351]: audit 2026-03-09T17:24:15.989398+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 bash[23351]: audit 2026-03-09T17:24:15.989398+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:16.857 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:24:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:24:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:16 vm00 bash[28333]: cluster 2026-03-09T17:24:15.341793+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:16 vm00 bash[28333]: cluster 2026-03-09T17:24:15.341793+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:16 vm00 bash[28333]: audit 2026-03-09T17:24:15.988606+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T17:24:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:16 vm00 bash[28333]: audit 2026-03-09T17:24:15.988606+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:16 vm00 bash[28333]: audit 2026-03-09T17:24:15.989398+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:16 vm00 bash[28333]: audit 2026-03-09T17:24:15.989398+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:16 vm00 bash[20770]: cluster 2026-03-09T17:24:15.341793+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:16 vm00 bash[20770]: cluster 2026-03-09T17:24:15.341793+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:16 vm00 bash[20770]: audit 2026-03-09T17:24:15.988606+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:16 vm00 bash[20770]: audit 2026-03-09T17:24:15.988606+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:16 vm00 bash[20770]: audit 2026-03-09T17:24:15.989398+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:16 vm00 bash[20770]: audit 2026-03-09T17:24:15.989398+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:17.109 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:24:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:17.110 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: cephadm 2026-03-09T17:24:15.989999+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm02 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: cephadm 2026-03-09T17:24:15.989999+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm02 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: audit 2026-03-09T17:24:17.105690+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: audit 2026-03-09T17:24:17.105690+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: audit 2026-03-09T17:24:17.112503+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: audit 2026-03-09T17:24:17.112503+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: audit 2026-03-09T17:24:17.119505+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:17 vm02 bash[23351]: audit 2026-03-09T17:24:17.119505+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: cephadm 2026-03-09T17:24:15.989999+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm02 2026-03-09T17:24:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: cephadm 2026-03-09T17:24:15.989999+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm02 2026-03-09T17:24:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: audit 2026-03-09T17:24:17.105690+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: audit 2026-03-09T17:24:17.105690+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T17:24:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: audit 2026-03-09T17:24:17.112503+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: audit 2026-03-09T17:24:17.112503+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: audit 2026-03-09T17:24:17.119505+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:17 vm00 bash[28333]: audit 2026-03-09T17:24:17.119505+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: cephadm 2026-03-09T17:24:15.989999+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm02 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: cephadm 2026-03-09T17:24:15.989999+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm02 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: audit 2026-03-09T17:24:17.105690+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: audit 2026-03-09T17:24:17.105690+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: audit 2026-03-09T17:24:17.112503+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: audit 2026-03-09T17:24:17.112503+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: audit 2026-03-09T17:24:17.119505+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:17 vm00 bash[20770]: audit 2026-03-09T17:24:17.119505+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:18.873 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:18 vm02 bash[23351]: cluster 2026-03-09T17:24:17.342062+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:18.874 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:18 vm02 bash[23351]: cluster 2026-03-09T17:24:17.342062+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:18 vm00 bash[28333]: cluster 2026-03-09T17:24:17.342062+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 
KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:18 vm00 bash[28333]: cluster 2026-03-09T17:24:17.342062+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:18 vm00 bash[20770]: cluster 2026-03-09T17:24:17.342062+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:18 vm00 bash[20770]: cluster 2026-03-09T17:24:17.342062+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:20 vm02 bash[23351]: cluster 2026-03-09T17:24:19.342341+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:20 vm02 bash[23351]: cluster 2026-03-09T17:24:19.342341+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:20 vm02 bash[23351]: audit 2026-03-09T17:24:20.532959+0000 mon.b (mon.1) 14 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:20 vm02 bash[23351]: audit 2026-03-09T17:24:20.532959+0000 mon.b (mon.1) 14 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:20 vm02 bash[23351]: audit 2026-03-09T17:24:20.533184+0000 mon.a (mon.0) 484 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:20 vm02 bash[23351]: audit 2026-03-09T17:24:20.533184+0000 mon.a (mon.0) 484 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:20 vm00 bash[28333]: cluster 2026-03-09T17:24:19.342341+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:20 vm00 bash[28333]: cluster 2026-03-09T17:24:19.342341+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:20 vm00 bash[28333]: audit 2026-03-09T17:24:20.532959+0000 mon.b (mon.1) 14 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:20 vm00 bash[28333]: audit 2026-03-09T17:24:20.532959+0000 mon.b (mon.1) 14 : audit [INF] 
from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:20 vm00 bash[28333]: audit 2026-03-09T17:24:20.533184+0000 mon.a (mon.0) 484 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:20 vm00 bash[28333]: audit 2026-03-09T17:24:20.533184+0000 mon.a (mon.0) 484 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:20 vm00 bash[20770]: cluster 2026-03-09T17:24:19.342341+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:20 vm00 bash[20770]: cluster 2026-03-09T17:24:19.342341+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:20 vm00 bash[20770]: audit 2026-03-09T17:24:20.532959+0000 mon.b (mon.1) 14 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:20 vm00 bash[20770]: audit 2026-03-09T17:24:20.532959+0000 mon.b (mon.1) 14 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:20 vm00 bash[20770]: audit 2026-03-09T17:24:20.533184+0000 mon.a (mon.0) 484 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:20 vm00 bash[20770]: audit 2026-03-09T17:24:20.533184+0000 mon.a (mon.0) 484 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.627173+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.627173+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.629870+0000 mon.b (mon.1) 15 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.629870+0000 mon.b (mon.1) 15 : audit [INF] from='osd.4 
v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: cluster 2026-03-09T17:24:20.633435+0000 mon.a (mon.0) 486 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: cluster 2026-03-09T17:24:20.633435+0000 mon.a (mon.0) 486 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T17:24:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.633863+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.633863+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.634054+0000 mon.a (mon.0) 488 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:21 vm00 bash[28333]: audit 2026-03-09T17:24:20.634054+0000 mon.a (mon.0) 488 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.627173+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.627173+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.629870+0000 mon.b (mon.1) 15 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.629870+0000 mon.b (mon.1) 15 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: cluster 2026-03-09T17:24:20.633435+0000 mon.a (mon.0) 486 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: cluster 2026-03-09T17:24:20.633435+0000 mon.a (mon.0) 486 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 
2026-03-09T17:24:20.633863+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.633863+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.634054+0000 mon.a (mon.0) 488 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:21 vm00 bash[20770]: audit 2026-03-09T17:24:20.634054+0000 mon.a (mon.0) 488 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.627173+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.627173+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.629870+0000 mon.b (mon.1) 15 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.629870+0000 mon.b (mon.1) 15 : audit [INF] from='osd.4 v2:192.168.123.102:6800/1924015120' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: cluster 2026-03-09T17:24:20.633435+0000 mon.a (mon.0) 486 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: cluster 2026-03-09T17:24:20.633435+0000 mon.a (mon.0) 486 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.633863+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.633863+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.634054+0000 mon.a (mon.0) 488 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", 
"id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:21 vm02 bash[23351]: audit 2026-03-09T17:24:20.634054+0000 mon.a (mon.0) 488 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: cluster 2026-03-09T17:24:21.342585+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: cluster 2026-03-09T17:24:21.342585+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: audit 2026-03-09T17:24:21.629858+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: audit 2026-03-09T17:24:21.629858+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: cluster 2026-03-09T17:24:21.637066+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: cluster 2026-03-09T17:24:21.637066+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: audit 2026-03-09T17:24:21.637985+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: audit 2026-03-09T17:24:21.637985+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: audit 2026-03-09T17:24:22.635931+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:22 vm00 bash[28333]: audit 2026-03-09T17:24:22.635931+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: cluster 2026-03-09T17:24:21.342585+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: cluster 2026-03-09T17:24:21.342585+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 
MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: audit 2026-03-09T17:24:21.629858+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: audit 2026-03-09T17:24:21.629858+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: cluster 2026-03-09T17:24:21.637066+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: cluster 2026-03-09T17:24:21.637066+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: audit 2026-03-09T17:24:21.637985+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: audit 2026-03-09T17:24:21.637985+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: audit 2026-03-09T17:24:22.635931+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:22 vm00 bash[20770]: audit 2026-03-09T17:24:22.635931+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: cluster 2026-03-09T17:24:21.342585+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: cluster 2026-03-09T17:24:21.342585+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: audit 2026-03-09T17:24:21.629858+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: audit 2026-03-09T17:24:21.629858+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: cluster 2026-03-09T17:24:21.637066+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 
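
The audit entries above show osd.4 registering itself at boot: it first sets its device class (`osd crush set-device-class ... hdd`) and then places itself in the CRUSH map with `osd crush create-or-move` at weight 0.0195 under `host=vm02, root=default`. A minimal sketch of where that weight comes from, assuming the usual convention that boot-time CRUSH weight equals the data device size in TiB (consistent with the cluster "avail" growing from 80 GiB to 100 GiB once the ~20 GiB OSD joins); the helper name is illustrative, not teuthology or cephadm code:

```python
# Illustrative helper: build the mon command JSON an OSD sends at startup,
# matching the cmd=[{...}] payload visible in the audit lines above.
import json

def crush_create_or_move_cmd(osd_id: int, size_gib: float, host: str) -> str:
    # CRUSH weight is conventionally the device size in TiB,
    # so a ~20 GiB test volume shows up as weight 0.0195.
    weight = round(size_gib / 1024, 4)
    return json.dumps({
        "prefix": "osd crush create-or-move",
        "id": osd_id,
        "weight": weight,
        "args": [f"host={host}", "root=default"],
    })

print(crush_create_or_move_cmd(4, 20, "vm02"))  # weight 0.0195, as logged
```
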
2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: cluster 2026-03-09T17:24:21.637066+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: audit 2026-03-09T17:24:21.637985+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: audit 2026-03-09T17:24:21.637985+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: audit 2026-03-09T17:24:22.635931+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.102 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:22 vm02 bash[23351]: audit 2026-03-09T17:24:22.635931+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: cluster 2026-03-09T17:24:21.490364+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: cluster 2026-03-09T17:24:21.490364+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: cluster 2026-03-09T17:24:21.490432+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: cluster 2026-03-09T17:24:21.490432+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: cluster 2026-03-09T17:24:22.648615+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: cluster 2026-03-09T17:24:22.648615+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:22.650121+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:22.650121+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:22.749417+0000 mon.a (mon.0) 495 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:22.749417+0000 mon.a (mon.0) 495 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 
2026-03-09T17:24:23.376235+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:23.376235+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:23.383971+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:23.383971+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:23.641405+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:23.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:23 vm02 bash[23351]: audit 2026-03-09T17:24:23.641405+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: cluster 2026-03-09T17:24:21.490364+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: cluster 2026-03-09T17:24:21.490364+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: cluster 2026-03-09T17:24:21.490432+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: cluster 2026-03-09T17:24:21.490432+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: cluster 2026-03-09T17:24:22.648615+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: cluster 2026-03-09T17:24:22.648615+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:22.650121+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:22.650121+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:22.749417+0000 mon.a (mon.0) 495 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:22.749417+0000 mon.a (mon.0) 495 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:23.376235+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:23.376235+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:23.383971+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:23.383971+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:23.641405+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:23 vm00 bash[28333]: audit 2026-03-09T17:24:23.641405+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: cluster 2026-03-09T17:24:21.490364+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: cluster 2026-03-09T17:24:21.490364+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: cluster 2026-03-09T17:24:21.490432+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: cluster 2026-03-09T17:24:21.490432+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: cluster 2026-03-09T17:24:22.648615+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: cluster 2026-03-09T17:24:22.648615+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:22.650121+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:22.650121+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:22.749417+0000 mon.a (mon.0) 495 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:22.749417+0000 mon.a (mon.0) 495 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-09T17:24:24.038 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:23.376235+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:23.376235+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:23.383971+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:23.383971+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:23.641405+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:23 vm00 bash[20770]: audit 2026-03-09T17:24:23.641405+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.659 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 4 on host 'vm02' 2026-03-09T17:24:24.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.653+0000 7f96af7fe640 1 -- 192.168.123.102:0/2305073752 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f9690002bf0 con 0x7f96a0077600 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.653+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 >> v2:192.168.123.100:6800/3114914985 conn(0x7f96a0077600 msgr2=0x7f96a0079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.653+0000 7f96c73b5640 1 --2- 192.168.123.102:0/2305073752 >> v2:192.168.123.100:6800/3114914985 conn(0x7f96a0077600 0x7f96a0079ac0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f96c019d8f0 tx=0x7f96b00073d0 comp rx=0 tx=0).stop 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.653+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 msgr2=0x7f96c019c3d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.653+0000 7f96c73b5640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 0x7f96c019c3d0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f96b4009a50 tx=0x7f96b40387f0 comp rx=0 tx=0).stop 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 shutdown_connections 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 --2- 192.168.123.102:0/2305073752 >> v2:192.168.123.100:6800/3114914985 conn(0x7f96a0077600 0x7f96a0079ac0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:24.660 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c0104590 0x7f96c01a3990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f96c0103bd0 0x7f96c019c910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 --2- 192.168.123.102:0/2305073752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f96c01029d0 0x7f96c019c3d0 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:24.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 >> 192.168.123.102:0/2305073752 conn(0x7f96c00fe180 msgr2=0x7f96c00ffc10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:24:24.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 shutdown_connections 2026-03-09T17:24:24.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:24.657+0000 7f96c73b5640 1 -- 192.168.123.102:0/2305073752 wait complete. 2026-03-09T17:24:24.760 DEBUG:teuthology.orchestra.run.vm02:osd.4> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.4.service 2026-03-09T17:24:24.761 INFO:tasks.cephadm:Deploying osd.5 on vm02 with /dev/vdd... 2026-03-09T17:24:24.761 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdd 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:23.342838+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:23.342838+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:23.664832+0000 mon.a (mon.0) 499 : cluster [INF] osd.4 v2:192.168.123.102:6800/1924015120 boot 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:23.664832+0000 mon.a (mon.0) 499 : cluster [INF] osd.4 v2:192.168.123.102:6800/1924015120 boot 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:23.664887+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:23.664887+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.665651+0000 mon.a 
(mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.665651+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.784071+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.784071+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.784653+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.784653+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.791605+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:23.791605+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:24.643909+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:24.643909+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:24.649082+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:24.649082+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:24.656152+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: audit 2026-03-09T17:24:24.656152+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:24.885 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:24.668930+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T17:24:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:24 vm02 bash[23351]: cluster 2026-03-09T17:24:24.668930+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T17:24:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:23.342838+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:23.342838+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:23.664832+0000 mon.a (mon.0) 499 : cluster [INF] osd.4 v2:192.168.123.102:6800/1924015120 boot 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:23.664832+0000 mon.a (mon.0) 499 : cluster [INF] osd.4 v2:192.168.123.102:6800/1924015120 boot 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:23.664887+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:23.664887+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.665651+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.665651+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.784071+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.784071+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.784653+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.784653+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 
2026-03-09T17:24:23.791605+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:23.791605+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:24.643909+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:24.643909+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:24.649082+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:24.649082+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:24.656152+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: audit 2026-03-09T17:24:24.656152+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:24.668930+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:24 vm00 bash[28333]: cluster 2026-03-09T17:24:24.668930+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:23.342838+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:23.342838+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:23.664832+0000 mon.a (mon.0) 499 : cluster [INF] osd.4 v2:192.168.123.102:6800/1924015120 boot 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:23.664832+0000 mon.a (mon.0) 499 : cluster [INF] osd.4 v2:192.168.123.102:6800/1924015120 boot 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:23.664887+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:23.664887+0000 mon.a (mon.0) 500 : cluster 
[DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.665651+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.665651+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.784071+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.784071+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.784653+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.784653+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.791605+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:23.791605+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:24.643909+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:24.643909+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:24.649082+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:24.649082+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 2026-03-09T17:24:24.656152+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: audit 
2026-03-09T17:24:24.656152+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:24.668930+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T17:24:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:24 vm00 bash[20770]: cluster 2026-03-09T17:24:24.668930+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:26 vm00 bash[28333]: cluster 2026-03-09T17:24:25.343091+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:26 vm00 bash[28333]: cluster 2026-03-09T17:24:25.343091+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:26 vm00 bash[28333]: cluster 2026-03-09T17:24:25.669641+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:26 vm00 bash[28333]: cluster 2026-03-09T17:24:25.669641+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:26 vm00 bash[20770]: cluster 2026-03-09T17:24:25.343091+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:26 vm00 bash[20770]: cluster 2026-03-09T17:24:25.343091+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:26 vm00 bash[20770]: cluster 2026-03-09T17:24:25.669641+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-09T17:24:27.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:26 vm00 bash[20770]: cluster 2026-03-09T17:24:25.669641+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-09T17:24:27.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:26 vm02 bash[23351]: cluster 2026-03-09T17:24:25.343091+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:26 vm02 bash[23351]: cluster 2026-03-09T17:24:25.343091+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:26 vm02 bash[23351]: cluster 2026-03-09T17:24:25.669641+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-09T17:24:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:26 vm02 bash[23351]: cluster 2026-03-09T17:24:25.669641+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-09T17:24:29.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:28 vm00 bash[28333]: cluster 2026-03-09T17:24:27.343305+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v137: 1 pgs: 1 active+recovering; 449 KiB data, 134 
MiB used, 100 GiB / 100 GiB avail; 95 KiB/s, 0 objects/s recovering 2026-03-09T17:24:29.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:28 vm00 bash[28333]: cluster 2026-03-09T17:24:27.343305+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v137: 1 pgs: 1 active+recovering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 95 KiB/s, 0 objects/s recovering 2026-03-09T17:24:29.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:28 vm00 bash[20770]: cluster 2026-03-09T17:24:27.343305+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v137: 1 pgs: 1 active+recovering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 95 KiB/s, 0 objects/s recovering 2026-03-09T17:24:29.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:28 vm00 bash[20770]: cluster 2026-03-09T17:24:27.343305+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v137: 1 pgs: 1 active+recovering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 95 KiB/s, 0 objects/s recovering 2026-03-09T17:24:29.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:28 vm02 bash[23351]: cluster 2026-03-09T17:24:27.343305+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v137: 1 pgs: 1 active+recovering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 95 KiB/s, 0 objects/s recovering 2026-03-09T17:24:29.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:28 vm02 bash[23351]: cluster 2026-03-09T17:24:27.343305+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v137: 1 pgs: 1 active+recovering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 95 KiB/s, 0 objects/s recovering 2026-03-09T17:24:29.425 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: cluster 2026-03-09T17:24:29.343591+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 active+recovering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: cluster 2026-03-09T17:24:29.343591+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 active+recovering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.250085+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.250085+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.253762+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.253762+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.254455+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 
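
With osd.4 up and the PG recovering onto it, the cephadm task moves on to the next device on vm02: earlier in this log it zapped /dev/vdd with `cephadm ... ceph-volume ... -- lvm zap /dev/vdd`, and it now asks the orchestrator to create osd.5 there. A sketch of that per-device loop as it appears in this run, using the same cephadm image and fsid shown in the logged commands; this is an illustration of the commands being run, not the actual tasks.cephadm implementation:

```python
# Sketch of the zap-then-add loop visible in this log (commands as logged).
import subprocess

CEPHADM = ["sudo", "/home/ubuntu/cephtest/cephadm",
           "--image", "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"]
CONF = ["-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring"]
FSID = ["--fsid", "16190428-1bdc-11f1-aea4-d920f1c7e51e"]

def deploy_osd(host: str, device: str) -> None:
    # Wipe any previous LVM/partition state from the device.
    subprocess.run(CEPHADM + ["ceph-volume"] + CONF + FSID +
                   ["--", "lvm", "zap", device], check=True)
    # Ask the orchestrator to create and start an OSD on that device.
    subprocess.run(CEPHADM + ["shell"] + CONF + FSID +
                   ["--", "ceph", "orch", "daemon", "add", "osd", f"{host}:{device}"],
                   check=True)

deploy_osd("vm02", "/dev/vdd")
```
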
2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.254455+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.255401+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.255401+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.255773+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.255773+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.259617+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:30.957 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:30 vm02 bash[23351]: audit 2026-03-09T17:24:30.259617+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:30.965 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:24:30.982 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm02:/dev/vdd 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: cluster 2026-03-09T17:24:29.343591+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 active+recovering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: cluster 2026-03-09T17:24:29.343591+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 active+recovering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.250085+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.250085+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.253762+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.253762+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.254455+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.254455+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.255401+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.255401+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.255773+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.255773+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.259617+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:30 vm00 bash[28333]: audit 2026-03-09T17:24:30.259617+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: cluster 2026-03-09T17:24:29.343591+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 active+recovering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: cluster 2026-03-09T17:24:29.343591+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 active+recovering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.250085+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.250085+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.253762+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.253762+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.254455+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.254455+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.255401+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.255401+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.255773+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.255773+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.259617+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:30 vm00 bash[20770]: audit 2026-03-09T17:24:30.259617+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:31 vm00 bash[28333]: cephadm 2026-03-09T17:24:30.244319+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:31 vm00 bash[28333]: cephadm 2026-03-09T17:24:30.244319+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:31 vm00 bash[28333]: cephadm 2026-03-09T17:24:30.254775+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:31 vm00 bash[28333]: cephadm 2026-03-09T17:24:30.254775+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M 
2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:31 vm00 bash[28333]: cephadm 2026-03-09T17:24:30.255129+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:31 vm00 bash[28333]: cephadm 2026-03-09T17:24:30.255129+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:31 vm00 bash[20770]: cephadm 2026-03-09T17:24:30.244319+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:31 vm00 bash[20770]: cephadm 2026-03-09T17:24:30.244319+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:31 vm00 bash[20770]: cephadm 2026-03-09T17:24:30.254775+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:31 vm00 bash[20770]: cephadm 2026-03-09T17:24:30.254775+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:31 vm00 bash[20770]: cephadm 2026-03-09T17:24:30.255129+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T17:24:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:31 vm00 bash[20770]: cephadm 2026-03-09T17:24:30.255129+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T17:24:32.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:31 vm02 bash[23351]: cephadm 2026-03-09T17:24:30.244319+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:24:32.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:31 vm02 bash[23351]: cephadm 2026-03-09T17:24:30.244319+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:24:32.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:31 vm02 bash[23351]: cephadm 2026-03-09T17:24:30.254775+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M 2026-03-09T17:24:32.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:31 vm02 bash[23351]: cephadm 2026-03-09T17:24:30.254775+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M 2026-03-09T17:24:32.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:31 vm02 bash[23351]: cephadm 2026-03-09T17:24:30.255129+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T17:24:32.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:31 vm02 bash[23351]: cephadm 2026-03-09T17:24:30.255129+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T17:24:33.037 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:32 vm00 bash[28333]: cluster 2026-03-09T17:24:31.343931+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 58 KiB/s, 0 objects/s recovering 2026-03-09T17:24:33.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:32 vm00 bash[28333]: cluster 2026-03-09T17:24:31.343931+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 58 KiB/s, 0 objects/s recovering 2026-03-09T17:24:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:32 vm00 bash[20770]: cluster 2026-03-09T17:24:31.343931+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 58 KiB/s, 0 objects/s recovering 2026-03-09T17:24:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:32 vm00 bash[20770]: cluster 2026-03-09T17:24:31.343931+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 58 KiB/s, 0 objects/s recovering 2026-03-09T17:24:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:32 vm02 bash[23351]: cluster 2026-03-09T17:24:31.343931+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 58 KiB/s, 0 objects/s recovering 2026-03-09T17:24:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:32 vm02 bash[23351]: cluster 2026-03-09T17:24:31.343931+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 58 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:34 vm00 bash[28333]: cluster 2026-03-09T17:24:33.344281+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:34 vm00 bash[28333]: cluster 2026-03-09T17:24:33.344281+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:34 vm00 bash[20770]: cluster 2026-03-09T17:24:33.344281+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:34 vm00 bash[20770]: cluster 2026-03-09T17:24:33.344281+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:34 vm02 bash[23351]: cluster 2026-03-09T17:24:33.344281+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:34 vm02 bash[23351]: cluster 2026-03-09T17:24:33.344281+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-09T17:24:35.600 
INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:24:35.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 -- 192.168.123.102:0/677432043 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 msgr2=0x7f216c101da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 --2- 192.168.123.102:0/677432043 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c101da0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f2154009a30 tx=0x7f215402f220 comp rx=0 tx=0).stop 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 -- 192.168.123.102:0/677432043 shutdown_connections 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 --2- 192.168.123.102:0/677432043 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f216c103560 0x7f216c109df0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 --2- 192.168.123.102:0/677432043 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f216c102ba0 0x7f216c103020 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 --2- 192.168.123.102:0/677432043 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c101da0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 -- 192.168.123.102:0/677432043 >> 192.168.123.102:0/677432043 conn(0x7f216c0fd150 msgr2=0x7f216c0ff570 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 -- 192.168.123.102:0/677432043 shutdown_connections 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 -- 192.168.123.102:0/677432043 wait complete. 
2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 Processor -- start 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 -- start start 2026-03-09T17:24:35.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.769+0000 7f2170e67640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c19a390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f216c102ba0 0x7f216c19a8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f216c103560 0x7f216c1a1950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f216c10ca90 con 0x7f216c102ba0 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f216c10c910 con 0x7f216c1019a0 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f216c10cc10 con 0x7f216c103560 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c19a390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c19a390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:53998/0 (socket says 192.168.123.102:53998) 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 -- 192.168.123.102:0/2921972089 learned_addr learned my addr 192.168.123.102:0/2921972089 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216ad76640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f216c103560 0x7f216c1a1950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2169d74640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f216c102ba0 0x7f216c19a8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 -- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f216c103560 msgr2=0x7f216c1a1950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f216c103560 0x7f216c1a1950 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 -- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f216c102ba0 msgr2=0x7f216c19a8d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f216c102ba0 0x7f216c19a8d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:35.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 -- 192.168.123.102:0/2921972089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f216c103e40 con 0x7f216c1019a0 2026-03-09T17:24:35.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f216a575640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c19a390 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f2154009a00 tx=0x7f215402fd80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:24:35.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2154004280 con 0x7f216c1019a0 2026-03-09T17:24:35.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2154004420 con 0x7f216c1019a0 2026-03-09T17:24:35.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2154004e20 con 0x7f216c1019a0 2026-03-09T17:24:35.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f216c1040d0 con 0x7f216c1019a0 2026-03-09T17:24:35.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f216c1044d0 con 0x7f216c1019a0 2026-03-09T17:24:35.779 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f2154038770 con 0x7f216c1019a0 2026-03-09T17:24:35.781 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2138005180 con 0x7f216c1019a0 2026-03-09T17:24:35.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f21537fe640 1 --2- 192.168.123.102:0/2921972089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2148077600 0x7f2148079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:24:35.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(34..34 src has 1..34) ==== 3185+0+0 (secure 0 0 0) 0x7f21540bd6c0 con 0x7f216c1019a0 2026-03-09T17:24:35.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.773+0000 7f2169d74640 1 --2- 192.168.123.102:0/2921972089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2148077600 0x7f2148079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:24:35.782 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.777+0000 7f2169d74640 1 --2- 192.168.123.102:0/2921972089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2148077600 0x7f2148079ac0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f216c19b8b0 tx=0x7f216000a380 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:24:35.782 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.777+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f21540881a0 con 0x7f216c1019a0 2026-03-09T17:24:35.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:35.873+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f2138002bf0 con 0x7f2148077600 2026-03-09T17:24:37.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: cluster 2026-03-09T17:24:35.344550+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: cluster 2026-03-09T17:24:35.344550+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.879421+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.879421+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 
vm00 bash[28333]: audit 2026-03-09T17:24:35.880683+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.880683+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.882172+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.882172+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.882632+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:36 vm00 bash[28333]: audit 2026-03-09T17:24:35.882632+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: cluster 2026-03-09T17:24:35.344550+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: cluster 2026-03-09T17:24:35.344550+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.879421+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.879421+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.880683+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.880683+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.882172+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.882172+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.882632+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:36 vm00 bash[20770]: audit 2026-03-09T17:24:35.882632+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: cluster 2026-03-09T17:24:35.344550+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: cluster 2026-03-09T17:24:35.344550+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.879421+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.879421+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.880683+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.880683+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.882172+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.882172+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.882632+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:36 vm02 bash[23351]: audit 2026-03-09T17:24:35.882632+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:39.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:38 vm00 bash[28333]: cluster 2026-03-09T17:24:37.344780+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T17:24:39.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:38 vm00 bash[28333]: cluster 2026-03-09T17:24:37.344780+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T17:24:39.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:38 vm00 bash[20770]: cluster 2026-03-09T17:24:37.344780+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T17:24:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:38 vm00 bash[20770]: cluster 2026-03-09T17:24:37.344780+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T17:24:39.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:38 vm02 bash[23351]: cluster 2026-03-09T17:24:37.344780+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T17:24:39.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:38 vm02 bash[23351]: cluster 2026-03-09T17:24:37.344780+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T17:24:41.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:40 vm00 bash[28333]: cluster 2026-03-09T17:24:39.345103+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:41.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:40 vm00 bash[28333]: cluster 2026-03-09T17:24:39.345103+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:41.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:40 vm00 bash[20770]: cluster 2026-03-09T17:24:39.345103+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:41.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:40 vm00 bash[20770]: cluster 2026-03-09T17:24:39.345103+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:40 vm02 
bash[23351]: cluster 2026-03-09T17:24:39.345103+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:40 vm02 bash[23351]: cluster 2026-03-09T17:24:39.345103+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.244617+0000 mon.b (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/3296452745' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.244617+0000 mon.b (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/3296452745' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.245031+0000 mon.a (mon.0) 519 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.245031+0000 mon.a (mon.0) 519 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.247959+0000 mon.a (mon.0) 520 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]': finished 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.247959+0000 mon.a (mon.0) 520 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]': finished 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: cluster 2026-03-09T17:24:41.252134+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T17:24:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: cluster 2026-03-09T17:24:41.252134+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.252921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:41 vm00 bash[28333]: audit 2026-03-09T17:24:41.252921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.244617+0000 mon.b (mon.1) 16 : audit [INF] from='client.? 
192.168.123.102:0/3296452745' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.244617+0000 mon.b (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/3296452745' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.245031+0000 mon.a (mon.0) 519 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.245031+0000 mon.a (mon.0) 519 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.247959+0000 mon.a (mon.0) 520 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]': finished 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.247959+0000 mon.a (mon.0) 520 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]': finished 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: cluster 2026-03-09T17:24:41.252134+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: cluster 2026-03-09T17:24:41.252134+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.252921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:41 vm00 bash[20770]: audit 2026-03-09T17:24:41.252921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.244617+0000 mon.b (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/3296452745' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.244617+0000 mon.b (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/3296452745' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.245031+0000 mon.a (mon.0) 519 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.245031+0000 mon.a (mon.0) 519 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]: dispatch 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.247959+0000 mon.a (mon.0) 520 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]': finished 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.247959+0000 mon.a (mon.0) 520 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "af8538f6-82da-4394-a355-4fac49048640"}]': finished 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: cluster 2026-03-09T17:24:41.252134+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: cluster 2026-03-09T17:24:41.252134+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.252921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:41 vm02 bash[23351]: audit 2026-03-09T17:24:41.252921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:42 vm00 bash[28333]: cluster 2026-03-09T17:24:41.345321+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:42 vm00 bash[28333]: cluster 2026-03-09T17:24:41.345321+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:42 vm00 bash[28333]: audit 2026-03-09T17:24:41.850710+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.102:0/1411042490' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:42 vm00 bash[28333]: audit 2026-03-09T17:24:41.850710+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 
192.168.123.102:0/1411042490' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:43.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:42 vm00 bash[20770]: cluster 2026-03-09T17:24:41.345321+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:42 vm00 bash[20770]: cluster 2026-03-09T17:24:41.345321+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:42 vm00 bash[20770]: audit 2026-03-09T17:24:41.850710+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.102:0/1411042490' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:42 vm00 bash[20770]: audit 2026-03-09T17:24:41.850710+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.102:0/1411042490' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:42 vm02 bash[23351]: cluster 2026-03-09T17:24:41.345321+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:42 vm02 bash[23351]: cluster 2026-03-09T17:24:41.345321+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:42 vm02 bash[23351]: audit 2026-03-09T17:24:41.850710+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.102:0/1411042490' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:42 vm02 bash[23351]: audit 2026-03-09T17:24:41.850710+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 
192.168.123.102:0/1411042490' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:24:45.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:44 vm00 bash[28333]: cluster 2026-03-09T17:24:43.345613+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:45.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:44 vm00 bash[28333]: cluster 2026-03-09T17:24:43.345613+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:44 vm00 bash[20770]: cluster 2026-03-09T17:24:43.345613+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:44 vm00 bash[20770]: cluster 2026-03-09T17:24:43.345613+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:44 vm02 bash[23351]: cluster 2026-03-09T17:24:43.345613+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:44 vm02 bash[23351]: cluster 2026-03-09T17:24:43.345613+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:47.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:46 vm00 bash[28333]: cluster 2026-03-09T17:24:45.345921+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:47.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:46 vm00 bash[28333]: cluster 2026-03-09T17:24:45.345921+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:47.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:46 vm00 bash[20770]: cluster 2026-03-09T17:24:45.345921+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:47.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:46 vm00 bash[20770]: cluster 2026-03-09T17:24:45.345921+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:47.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:46 vm02 bash[23351]: cluster 2026-03-09T17:24:45.345921+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:47.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:46 vm02 bash[23351]: cluster 2026-03-09T17:24:45.345921+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:49.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:48 vm00 bash[28333]: cluster 2026-03-09T17:24:47.346159+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:49.037 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:48 vm00 bash[28333]: cluster 2026-03-09T17:24:47.346159+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:49.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:48 vm00 bash[20770]: cluster 2026-03-09T17:24:47.346159+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:48 vm00 bash[20770]: cluster 2026-03-09T17:24:47.346159+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:48 vm02 bash[23351]: cluster 2026-03-09T17:24:47.346159+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:48 vm02 bash[23351]: cluster 2026-03-09T17:24:47.346159+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:50.929 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:50 vm02 bash[23351]: cluster 2026-03-09T17:24:49.346407+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:50.929 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:50 vm02 bash[23351]: cluster 2026-03-09T17:24:49.346407+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:50.929 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:50 vm02 bash[23351]: audit 2026-03-09T17:24:50.139066+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T17:24:50.929 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:50 vm02 bash[23351]: audit 2026-03-09T17:24:50.139066+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T17:24:50.930 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:50 vm02 bash[23351]: audit 2026-03-09T17:24:50.139593+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:50.930 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:50 vm02 bash[23351]: audit 2026-03-09T17:24:50.139593+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:51.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:50 vm00 bash[28333]: cluster 2026-03-09T17:24:49.346407+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:50 vm00 bash[28333]: cluster 2026-03-09T17:24:49.346407+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:50 
vm00 bash[28333]: audit 2026-03-09T17:24:50.139066+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:50 vm00 bash[28333]: audit 2026-03-09T17:24:50.139066+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:50 vm00 bash[28333]: audit 2026-03-09T17:24:50.139593+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:50 vm00 bash[28333]: audit 2026-03-09T17:24:50.139593+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:50 vm00 bash[20770]: cluster 2026-03-09T17:24:49.346407+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:50 vm00 bash[20770]: cluster 2026-03-09T17:24:49.346407+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:50 vm00 bash[20770]: audit 2026-03-09T17:24:50.139066+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:50 vm00 bash[20770]: audit 2026-03-09T17:24:50.139066+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:50 vm00 bash[20770]: audit 2026-03-09T17:24:50.139593+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:51.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:50 vm00 bash[20770]: audit 2026-03-09T17:24:50.139593+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:51.200 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:51.200 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:24:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:51.200 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:24:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:51.508 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:24:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:51.508 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:24:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:24:51.508 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
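The KillMode=none warnings above come from line 23 of the unit template that cephadm installed for fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e; that is the setting current cephadm ships (process cleanup is left to the container runtime), so for this run the messages are informational only. As a minimal sketch, assuming an operator who does want systemd's lifecycle management back, the setting could be overridden with a drop-in instead of editing the generated unit (the drop-in path, file name, and KillMode value below are illustrative assumptions, not something this job does):

    fsid=16190428-1bdc-11f1-aea4-d920f1c7e51e
    # Create a drop-in for the cephadm-generated template unit.
    sudo mkdir -p /etc/systemd/system/ceph-${fsid}@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/ceph-${fsid}@.service.d/10-killmode.conf
    sudo systemctl daemon-reload                          # pick up the drop-in
    sudo systemctl restart ceph-${fsid}@osd.4.service     # restart one instance to apply it

Changing KillMode on cephadm-managed units can interfere with how the wrapped container is torn down, which is presumably why the template keeps KillMode=none despite the deprecation warning; the job leaves it untouched.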
2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: cephadm 2026-03-09T17:24:50.139981+0000 mgr.y (mgr.14150) 175 : cephadm [INF] Deploying daemon osd.5 on vm02 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: cephadm 2026-03-09T17:24:50.139981+0000 mgr.y (mgr.14150) 175 : cephadm [INF] Deploying daemon osd.5 on vm02 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: audit 2026-03-09T17:24:51.430040+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: audit 2026-03-09T17:24:51.430040+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: audit 2026-03-09T17:24:51.436963+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: audit 2026-03-09T17:24:51.436963+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: audit 2026-03-09T17:24:51.442663+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:51.775 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:51 vm02 bash[23351]: audit 2026-03-09T17:24:51.442663+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: cephadm 2026-03-09T17:24:50.139981+0000 mgr.y (mgr.14150) 175 : cephadm [INF] Deploying daemon osd.5 on vm02 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: cephadm 2026-03-09T17:24:50.139981+0000 mgr.y (mgr.14150) 175 : cephadm [INF] Deploying daemon osd.5 on vm02 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: audit 2026-03-09T17:24:51.430040+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: audit 2026-03-09T17:24:51.430040+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: audit 2026-03-09T17:24:51.436963+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: audit 2026-03-09T17:24:51.436963+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: audit 2026-03-09T17:24:51.442663+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
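Before mgr.y pushes osd.5 to vm02 it reads back the pieces it deploys alongside the daemon: the daemon keyring ("auth get osd.5", audit entry 523 above), a minimal ceph.conf ("config generate-minimal-conf", entry 524), and the current config ("config dump", entry 525). A hedged sketch of performing the same reads by hand, using only command prefixes that appear in these audit payloads (the head pipe is just to keep output short):

    # Keyring for the new daemon (audit entry 523) and the minimal conf that
    # cephadm typically writes next to it on the target host (entry 524).
    sudo ceph auth get osd.5
    sudo ceph config generate-minimal-conf
    # Full config view the mgr dumps before deploying (entry 525).
    sudo ceph config dump --format json-pretty | head -n 20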
2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:51 vm00 bash[28333]: audit 2026-03-09T17:24:51.442663+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: cephadm 2026-03-09T17:24:50.139981+0000 mgr.y (mgr.14150) 175 : cephadm [INF] Deploying daemon osd.5 on vm02 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: cephadm 2026-03-09T17:24:50.139981+0000 mgr.y (mgr.14150) 175 : cephadm [INF] Deploying daemon osd.5 on vm02 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: audit 2026-03-09T17:24:51.430040+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: audit 2026-03-09T17:24:51.430040+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: audit 2026-03-09T17:24:51.436963+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: audit 2026-03-09T17:24:51.436963+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: audit 2026-03-09T17:24:51.442663+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:51 vm00 bash[20770]: audit 2026-03-09T17:24:51.442663+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:52 vm02 bash[23351]: cluster 2026-03-09T17:24:51.346677+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:52 vm02 bash[23351]: cluster 2026-03-09T17:24:51.346677+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:52 vm00 bash[28333]: cluster 2026-03-09T17:24:51.346677+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:53.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:52 vm00 bash[28333]: cluster 2026-03-09T17:24:51.346677+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:53.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:52 vm00 bash[20770]: cluster 2026-03-09T17:24:51.346677+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:52 vm00 bash[20770]: cluster 
2026-03-09T17:24:51.346677+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:54 vm00 bash[28333]: cluster 2026-03-09T17:24:53.346966+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:54 vm00 bash[28333]: cluster 2026-03-09T17:24:53.346966+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:55.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:54 vm00 bash[20770]: cluster 2026-03-09T17:24:53.346966+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:54 vm00 bash[20770]: cluster 2026-03-09T17:24:53.346966+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:55.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:54 vm02 bash[23351]: cluster 2026-03-09T17:24:53.346966+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:55.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:54 vm02 bash[23351]: cluster 2026-03-09T17:24:53.346966+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:56.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:55 vm00 bash[28333]: audit 2026-03-09T17:24:54.789061+0000 mon.b (mon.1) 18 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:55 vm00 bash[28333]: audit 2026-03-09T17:24:54.789061+0000 mon.b (mon.1) 18 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:55 vm00 bash[28333]: audit 2026-03-09T17:24:54.789336+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:55 vm00 bash[28333]: audit 2026-03-09T17:24:54.789336+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:55 vm00 bash[20770]: audit 2026-03-09T17:24:54.789061+0000 mon.b (mon.1) 18 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:55 vm00 bash[20770]: audit 2026-03-09T17:24:54.789061+0000 mon.b (mon.1) 18 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: 
dispatch 2026-03-09T17:24:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:55 vm00 bash[20770]: audit 2026-03-09T17:24:54.789336+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:55 vm00 bash[20770]: audit 2026-03-09T17:24:54.789336+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:55 vm02 bash[23351]: audit 2026-03-09T17:24:54.789061+0000 mon.b (mon.1) 18 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:55 vm02 bash[23351]: audit 2026-03-09T17:24:54.789061+0000 mon.b (mon.1) 18 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:55 vm02 bash[23351]: audit 2026-03-09T17:24:54.789336+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:56.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:55 vm02 bash[23351]: audit 2026-03-09T17:24:54.789336+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: cluster 2026-03-09T17:24:55.347278+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: cluster 2026-03-09T17:24:55.347278+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.794040+0000 mon.a (mon.0) 529 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.794040+0000 mon.a (mon.0) 529 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: cluster 2026-03-09T17:24:55.796930+0000 mon.a (mon.0) 530 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: cluster 2026-03-09T17:24:55.796930+0000 mon.a (mon.0) 530 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.797105+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
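The osd.5 start-up sequence is visible in the audit stream here: the daemon registers its device class ("osd crush set-device-class ... hdd", dispatched and then finished), and just below it moves itself under its host with "osd crush create-or-move", each change producing a new osdmap epoch (e36, then e37). A short sketch of checking the result afterwards; the commands are stock ceph CLI and the ids are this job's:

    sudo ceph osd crush class ls                      # expect "hdd" to be listed
    sudo ceph osd metadata 5 --format json-pretty     # device, hostname, class reported by osd.5
    sudo ceph osd tree                                # osd.5 placed under host vm02, root default
    sudo ceph osd dump | grep '^epoch'                # current osdmap epoch after the moves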
2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.797105+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.798496+0000 mon.b (mon.1) 19 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.798496+0000 mon.b (mon.1) 19 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.798701+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:56 vm02 bash[23351]: audit 2026-03-09T17:24:55.798701+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: cluster 2026-03-09T17:24:55.347278+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: cluster 2026-03-09T17:24:55.347278+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.794040+0000 mon.a (mon.0) 529 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.794040+0000 mon.a (mon.0) 529 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: cluster 2026-03-09T17:24:55.796930+0000 mon.a (mon.0) 530 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: cluster 2026-03-09T17:24:55.796930+0000 mon.a (mon.0) 530 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.797105+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.797105+0000 mon.a (mon.0) 531 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.798496+0000 mon.b (mon.1) 19 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.798496+0000 mon.b (mon.1) 19 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.798701+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:56 vm00 bash[28333]: audit 2026-03-09T17:24:55.798701+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: cluster 2026-03-09T17:24:55.347278+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: cluster 2026-03-09T17:24:55.347278+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.794040+0000 mon.a (mon.0) 529 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.794040+0000 mon.a (mon.0) 529 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: cluster 2026-03-09T17:24:55.796930+0000 mon.a (mon.0) 530 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: cluster 2026-03-09T17:24:55.796930+0000 mon.a (mon.0) 530 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.797105+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.797105+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:57.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.798496+0000 mon.b (mon.1) 19 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.798496+0000 mon.b (mon.1) 19 : audit [INF] from='osd.5 v2:192.168.123.102:6804/2433922459' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.798701+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:56 vm00 bash[20770]: audit 2026-03-09T17:24:55.798701+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: cluster 2026-03-09T17:24:55.760333+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: cluster 2026-03-09T17:24:55.760333+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: cluster 2026-03-09T17:24:55.760393+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: cluster 2026-03-09T17:24:55.760393+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:56.797398+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:56.797398+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: cluster 2026-03-09T17:24:56.803696+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: cluster 2026-03-09T17:24:56.803696+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:56.804431+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:56.804431+0000 mon.a (mon.0) 535 : 
audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:56.806077+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:56.806077+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.694409+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.694409+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.698164+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.698164+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.698727+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.698727+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.699234+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.699234+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.703430+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:57 vm02 bash[23351]: audit 2026-03-09T17:24:57.703430+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: cluster 2026-03-09T17:24:55.760333+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: cluster 2026-03-09T17:24:55.760333+0000 osd.5 (osd.5) 1 : cluster [DBG] 
purged_snaps scrub starts 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: cluster 2026-03-09T17:24:55.760393+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: cluster 2026-03-09T17:24:55.760393+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:56.797398+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:56.797398+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: cluster 2026-03-09T17:24:56.803696+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: cluster 2026-03-09T17:24:56.803696+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:56.804431+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:56.804431+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:56.806077+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:56.806077+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.694409+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.694409+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.698164+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.698164+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 
bash[28333]: audit 2026-03-09T17:24:57.698727+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.698727+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.699234+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.699234+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.703430+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:57 vm00 bash[28333]: audit 2026-03-09T17:24:57.703430+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: cluster 2026-03-09T17:24:55.760333+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: cluster 2026-03-09T17:24:55.760333+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: cluster 2026-03-09T17:24:55.760393+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: cluster 2026-03-09T17:24:55.760393+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:56.797398+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:56.797398+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: cluster 2026-03-09T17:24:56.803696+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: cluster 2026-03-09T17:24:56.803696+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:56.804431+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:56.804431+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:56.806077+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:56.806077+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.694409+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.694409+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.698164+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.698164+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.698727+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.698727+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.699234+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.699234+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.703430+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:57 vm00 bash[20770]: audit 2026-03-09T17:24:57.703430+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:24:59.055 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 5 on host 'vm02' 2026-03-09T17:24:59.055 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.049+0000 7f21537fe640 1 -- 192.168.123.102:0/2921972089 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f2138002bf0 con 0x7f2148077600 2026-03-09T17:24:59.057 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2148077600 msgr2=0x7f2148079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:59.057 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 --2- 192.168.123.102:0/2921972089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2148077600 0x7f2148079ac0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f216c19b8b0 tx=0x7f216000a380 comp rx=0 tx=0).stop 2026-03-09T17:24:59.057 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 msgr2=0x7f216c19a390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:24:59.057 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c19a390 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f2154009a00 tx=0x7f215402fd80 comp rx=0 tx=0).stop 2026-03-09T17:24:59.057 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 shutdown_connections 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 --2- 192.168.123.102:0/2921972089 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2148077600 0x7f2148079ac0 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f216c103560 0x7f216c1a1950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f216c102ba0 0x7f216c19a8d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 --2- 192.168.123.102:0/2921972089 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f216c1019a0 0x7f216c19a390 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 >> 192.168.123.102:0/2921972089 conn(0x7f216c0fd150 msgr2=0x7f216c0fec10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 shutdown_connections 2026-03-09T17:24:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:24:59.053+0000 7f2170e67640 1 -- 192.168.123.102:0/2921972089 wait complete. 
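The "Created osd(s) 5 on host 'vm02'" message is the confirmation for the create call, after which the short-lived client messenger above tears its connections down and the task moves on to the next device, zapping /dev/vdc with the cephadm-wrapped ceph-volume invocation shown a few lines below. A hedged sketch of that per-device pattern; the zap command is copied from this log, while the creation step is an assumption (the "Created osd(s) N on host" wording matches what "ceph orch daemon add osd" prints, but the exact call this task makes is not shown in this excerpt):

    fsid=16190428-1bdc-11f1-aea4-d920f1c7e51e
    image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
    dev=/dev/vdc
    # 1. Wipe any leftover LVM/partition metadata on the device (as in the log below).
    sudo /home/ubuntu/cephtest/cephadm --image "$image" ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$fsid" -- lvm zap "$dev"
    # 2. Ask the orchestrator to create one OSD on the zapped device (assumed step).
    sudo ceph orch daemon add osd "vm02:$dev"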
2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: cluster 2026-03-09T17:24:57.347545+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: cluster 2026-03-09T17:24:57.347545+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: cluster 2026-03-09T17:24:57.809220+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 v2:192.168.123.102:6804/2433922459 boot 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: cluster 2026-03-09T17:24:57.809220+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 v2:192.168.123.102:6804/2433922459 boot 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: cluster 2026-03-09T17:24:57.809288+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: cluster 2026-03-09T17:24:57.809288+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: audit 2026-03-09T17:24:57.810018+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:59.070 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:24:59 vm02 bash[23351]: audit 2026-03-09T17:24:57.810018+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:59.135 DEBUG:teuthology.orchestra.run.vm02:osd.5> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.5.service 2026-03-09T17:24:59.136 INFO:tasks.cephadm:Deploying osd.6 on vm02 with /dev/vdc... 
2026-03-09T17:24:59.136 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdc 2026-03-09T17:24:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: cluster 2026-03-09T17:24:57.347545+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: cluster 2026-03-09T17:24:57.347545+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: cluster 2026-03-09T17:24:57.809220+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 v2:192.168.123.102:6804/2433922459 boot 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: cluster 2026-03-09T17:24:57.809220+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 v2:192.168.123.102:6804/2433922459 boot 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: cluster 2026-03-09T17:24:57.809288+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: cluster 2026-03-09T17:24:57.809288+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: audit 2026-03-09T17:24:57.810018+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:24:59 vm00 bash[28333]: audit 2026-03-09T17:24:57.810018+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: cluster 2026-03-09T17:24:57.347545+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: cluster 2026-03-09T17:24:57.347545+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: cluster 2026-03-09T17:24:57.809220+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 v2:192.168.123.102:6804/2433922459 boot 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: cluster 2026-03-09T17:24:57.809220+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 v2:192.168.123.102:6804/2433922459 boot 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: cluster 2026-03-09T17:24:57.809288+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: cluster 2026-03-09T17:24:57.809288+0000 mon.a (mon.0) 543 : 
cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: audit 2026-03-09T17:24:57.810018+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:24:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:24:59 vm00 bash[20770]: audit 2026-03-09T17:24:57.810018+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: cluster 2026-03-09T17:24:59.009539+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: cluster 2026-03-09T17:24:59.009539+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: audit 2026-03-09T17:24:59.043865+0000 mon.a (mon.0) 546 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: audit 2026-03-09T17:24:59.043865+0000 mon.a (mon.0) 546 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: audit 2026-03-09T17:24:59.048842+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: audit 2026-03-09T17:24:59.048842+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: audit 2026-03-09T17:24:59.053729+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: audit 2026-03-09T17:24:59.053729+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: cluster 2026-03-09T17:24:59.347845+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:00 vm02 bash[23351]: cluster 2026-03-09T17:24:59.347845+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:00.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: cluster 2026-03-09T17:24:59.009539+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T17:25:00.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: cluster 2026-03-09T17:24:59.009539+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T17:25:00.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: audit 2026-03-09T17:24:59.043865+0000 mon.a (mon.0) 
546 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:00.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: audit 2026-03-09T17:24:59.043865+0000 mon.a (mon.0) 546 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:00.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: audit 2026-03-09T17:24:59.048842+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: audit 2026-03-09T17:24:59.048842+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: audit 2026-03-09T17:24:59.053729+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: audit 2026-03-09T17:24:59.053729+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: cluster 2026-03-09T17:24:59.347845+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:00 vm00 bash[28333]: cluster 2026-03-09T17:24:59.347845+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: cluster 2026-03-09T17:24:59.009539+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: cluster 2026-03-09T17:24:59.009539+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: audit 2026-03-09T17:24:59.043865+0000 mon.a (mon.0) 546 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: audit 2026-03-09T17:24:59.043865+0000 mon.a (mon.0) 546 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: audit 2026-03-09T17:24:59.048842+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: audit 2026-03-09T17:24:59.048842+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: audit 2026-03-09T17:24:59.053729+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: audit 2026-03-09T17:24:59.053729+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: cluster 2026-03-09T17:24:59.347845+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:00 vm00 bash[20770]: cluster 2026-03-09T17:24:59.347845+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:01.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:01 vm02 bash[23351]: cluster 2026-03-09T17:25:00.069882+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in 2026-03-09T17:25:01.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:01 vm02 bash[23351]: cluster 2026-03-09T17:25:00.069882+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in 2026-03-09T17:25:01.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:01 vm00 bash[28333]: cluster 2026-03-09T17:25:00.069882+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in 2026-03-09T17:25:01.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:01 vm00 bash[28333]: cluster 2026-03-09T17:25:00.069882+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in 2026-03-09T17:25:01.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:01 vm00 bash[20770]: cluster 2026-03-09T17:25:00.069882+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in 2026-03-09T17:25:01.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:01 vm00 bash[20770]: cluster 2026-03-09T17:25:00.069882+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in 2026-03-09T17:25:02.077 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:02 vm02 bash[23351]: cluster 2026-03-09T17:25:01.348150+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:02.077 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:02 vm02 bash[23351]: cluster 2026-03-09T17:25:01.348150+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:02.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:02 vm00 bash[28333]: cluster 2026-03-09T17:25:01.348150+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:02.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:02 vm00 bash[28333]: cluster 2026-03-09T17:25:01.348150+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:02 vm00 bash[20770]: cluster 2026-03-09T17:25:01.348150+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:02 vm00 bash[20770]: cluster 2026-03-09T17:25:01.348150+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:03.813 
INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:25:04.740 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:04 vm02 bash[23351]: cluster 2026-03-09T17:25:03.348509+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:04.740 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:04 vm02 bash[23351]: cluster 2026-03-09T17:25:03.348509+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:04.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:04 vm00 bash[28333]: cluster 2026-03-09T17:25:03.348509+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:04 vm00 bash[28333]: cluster 2026-03-09T17:25:03.348509+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:04 vm00 bash[20770]: cluster 2026-03-09T17:25:03.348509+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:04 vm00 bash[20770]: cluster 2026-03-09T17:25:03.348509+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:05.412 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:25:05.422 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm02:/dev/vdc 2026-03-09T17:25:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: cephadm 2026-03-09T17:25:04.671698+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: cephadm 2026-03-09T17:25:04.671698+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.680077+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.680077+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.688200+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.688200+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.689296+0000 mon.a (mon.0) 552 : 
audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.689296+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.689783+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.689783+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: cephadm 2026-03-09T17:25:04.690101+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: cephadm 2026-03-09T17:25:04.690101+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: cephadm 2026-03-09T17:25:04.690445+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: cephadm 2026-03-09T17:25:04.690445+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.690729+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.690729+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.691101+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.691101+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.696267+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:05 vm00 bash[28333]: audit 2026-03-09T17:25:04.696267+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: cephadm 2026-03-09T17:25:04.671698+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: cephadm 2026-03-09T17:25:04.671698+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.680077+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.680077+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.688200+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.688200+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.689296+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.689296+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.689783+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.689783+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: cephadm 2026-03-09T17:25:04.690101+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: cephadm 2026-03-09T17:25:04.690101+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: cephadm 2026-03-09T17:25:04.690445+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T17:25:06.038 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: cephadm 2026-03-09T17:25:04.690445+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.690729+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.690729+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.691101+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.691101+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.696267+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:05 vm00 bash[20770]: audit 2026-03-09T17:25:04.696267+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: cephadm 2026-03-09T17:25:04.671698+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: cephadm 2026-03-09T17:25:04.671698+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.680077+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.680077+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.688200+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.688200+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.689296+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.689296+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.689783+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.689783+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: cephadm 2026-03-09T17:25:04.690101+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: cephadm 2026-03-09T17:25:04.690101+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: cephadm 2026-03-09T17:25:04.690445+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: cephadm 2026-03-09T17:25:04.690445+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.690729+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.690729+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.691101+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.691101+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.696267+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:05 vm02 bash[23351]: audit 2026-03-09T17:25:04.696267+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
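Annotation: the cephadm [WRN] entries above are the memory autotuner at work. After clearing the per-daemon overrides (the "config rm ... osd_memory_target" dispatches for osd.4 and osd.5), cephadm divides the memory available on vm02 among its OSDs and proposes roughly 227.8 MiB (238960844 bytes) per OSD, but the option's hard minimum is 939524096 bytes (896 MiB), so the value is rejected and the warning is logged instead of applying it. A small sketch of that bounds check, using only the numbers from this log:

    # Reproduce the bounds check behind the cephadm [WRN] above, with values from this log.
    MIN_OSD_MEMORY_TARGET = 939_524_096   # minimum the config option accepts (896 MiB)
    proposed = 238_960_844                # what the autotuner computed for vm02 (~227.8 MiB)

    print(f"proposed = {proposed / 2**20:.1f} MiB, minimum = {MIN_OSD_MEMORY_TARGET / 2**20:.0f} MiB")
    if proposed < MIN_OSD_MEMORY_TARGET:
        # cephadm logs the warning and leaves osd_memory_target unset rather than applying it.
        print("below minimum -> osd_memory_target not applied")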
2026-03-09T17:25:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:06 vm00 bash[28333]: cluster 2026-03-09T17:25:05.348824+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:06 vm00 bash[28333]: cluster 2026-03-09T17:25:05.348824+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:07.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:06 vm00 bash[20770]: cluster 2026-03-09T17:25:05.348824+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:07.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:06 vm00 bash[20770]: cluster 2026-03-09T17:25:05.348824+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:06 vm02 bash[23351]: cluster 2026-03-09T17:25:05.348824+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:06 vm02 bash[23351]: cluster 2026-03-09T17:25:05.348824+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:09.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:08 vm00 bash[28333]: cluster 2026-03-09T17:25:07.349053+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T17:25:09.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:08 vm00 bash[28333]: cluster 2026-03-09T17:25:07.349053+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T17:25:09.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:08 vm00 bash[20770]: cluster 2026-03-09T17:25:07.349053+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T17:25:09.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:08 vm00 bash[20770]: cluster 2026-03-09T17:25:07.349053+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T17:25:09.134 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:08 vm02 bash[23351]: cluster 2026-03-09T17:25:07.349053+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T17:25:09.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:08 vm02 bash[23351]: cluster 2026-03-09T17:25:07.349053+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T17:25:10.053 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:25:10.221 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- 192.168.123.102:0/2368159144 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c104d80 msgr2=0x7f2d9c105180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- 192.168.123.102:0/2368159144 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c104d80 0x7f2d9c105180 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f2d88009a80 tx=0x7f2d8802f270 comp rx=0 tx=0).stop 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- 192.168.123.102:0/2368159144 shutdown_connections 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- 192.168.123.102:0/2368159144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c106940 0x7f2d9c10d1d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- 192.168.123.102:0/2368159144 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2d9c105f80 0x7f2d9c106400 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- 192.168.123.102:0/2368159144 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c104d80 0x7f2d9c105180 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- 192.168.123.102:0/2368159144 >> 192.168.123.102:0/2368159144 conn(0x7f2d9c100510 msgr2=0x7f2d9c102950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- 192.168.123.102:0/2368159144 shutdown_connections 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- 192.168.123.102:0/2368159144 wait complete. 
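Annotation: the stderr lines above and below come from the ceph CLI inside the cephadm shell, with "debug ms: 1" from the job overrides making the messenger visible: the old mon session is torn down (mark_down, shutdown_connections), a new msgr2 session is established (banner, hello, READY), the client subscribes to config/monmap/mgrmap/osdmap, fetches command descriptions, and only then dispatches the mgr_command carrying "orch daemon add osd vm02:/dev/vdc". A rough client-side equivalent of that flow, sketched with python-rados under the assumption that the same conf and keyring paths are readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()   # opens the mon session: auth handshake plus monmap/config subscriptions

    # Commands travel as JSON, much like the mon_command/mgr_command payloads in the log.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd stat", "format": "json"}), b"")
    print(ret, outbuf.decode() or outs)
    cluster.shutdown()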
2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 Processor -- start 2026-03-09T17:25:10.222 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- start start 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c104d80 0x7f2d9c19c5e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c105f80 0x7f2d9c19cb20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2d9c106940 0x7f2d9c1a3ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2d9c10fe60 con 0x7f2d9c104d80 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f2d9c10fce0 con 0x7f2d9c105f80 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2da1b63640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f2d9c10ffe0 con 0x7f2d9c106940 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9b7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c104d80 0x7f2d9c19c5e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9b7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c104d80 0x7f2d9c19c5e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:58426/0 (socket says 192.168.123.102:58426) 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9b7fe640 1 -- 192.168.123.102:0/1073792182 learned_addr learned my addr 192.168.123.102:0/1073792182 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9affd640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c105f80 0x7f2d9c19cb20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:10.223 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9bfff640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2d9c106940 0x7f2d9c1a3ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9affd640 1 -- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2d9c106940 msgr2=0x7f2d9c1a3ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9affd640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2d9c106940 0x7f2d9c1a3ba0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9affd640 1 -- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c104d80 msgr2=0x7f2d9c19c5e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9affd640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c104d80 0x7f2d9c19c5e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.217+0000 7f2d9affd640 1 -- 192.168.123.102:0/1073792182 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2d9c1a42a0 con 0x7f2d9c105f80 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d9affd640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c105f80 0x7f2d9c19cb20 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f2d8c00d730 tx=0x7f2d8c00dc00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2d8c014070 con 0x7f2d9c105f80 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2d8c0044e0 con 0x7f2d9c105f80 2026-03-09T17:25:10.224 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2d9c1a4590 con 0x7f2d9c105f80 2026-03-09T17:25:10.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2d8c005020 con 0x7f2d9c105f80 2026-03-09T17:25:10.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2d9c1a4b20 con 0x7f2d9c105f80 2026-03-09T17:25:10.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f2d8c004780 con 0x7f2d9c105f80 2026-03-09T17:25:10.227 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2d9c105180 con 0x7f2d9c105f80 2026-03-09T17:25:10.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d98ff9640 1 --2- 192.168.123.102:0/1073792182 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2d74077540 0x7f2d74079a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:10.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(40..40 src has 1..40) ==== 3477+0+0 (secure 0 0 0) 0x7f2d8c098740 con 0x7f2d9c105f80 2026-03-09T17:25:10.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.221+0000 7f2d9b7fe640 1 --2- 192.168.123.102:0/1073792182 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2d74077540 0x7f2d74079a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:10.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.225+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2d8c01d070 con 0x7f2d9c105f80 2026-03-09T17:25:10.231 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.225+0000 7f2d9b7fe640 1 --2- 192.168.123.102:0/1073792182 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2d74077540 0x7f2d74079a00 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f2d88002c40 tx=0x7f2d88031040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:25:10.340 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:10.333+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f2d9c10b920 con 0x7f2d74077540 2026-03-09T17:25:11.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: cluster 2026-03-09T17:25:09.349354+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: cluster 2026-03-09T17:25:09.349354+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: audit 2026-03-09T17:25:10.343179+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: audit 2026-03-09T17:25:10.343179+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: audit 
2026-03-09T17:25:10.344603+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: audit 2026-03-09T17:25:10.344603+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: audit 2026-03-09T17:25:10.345018+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:10 vm00 bash[28333]: audit 2026-03-09T17:25:10.345018+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: cluster 2026-03-09T17:25:09.349354+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: cluster 2026-03-09T17:25:09.349354+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: audit 2026-03-09T17:25:10.343179+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: audit 2026-03-09T17:25:10.343179+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: audit 2026-03-09T17:25:10.344603+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: audit 2026-03-09T17:25:10.344603+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: audit 2026-03-09T17:25:10.345018+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:10 vm00 bash[20770]: audit 2026-03-09T17:25:10.345018+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: cluster 
2026-03-09T17:25:09.349354+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: cluster 2026-03-09T17:25:09.349354+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: audit 2026-03-09T17:25:10.343179+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: audit 2026-03-09T17:25:10.343179+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: audit 2026-03-09T17:25:10.344603+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: audit 2026-03-09T17:25:10.344603+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: audit 2026-03-09T17:25:10.345018+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:11.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:10 vm02 bash[23351]: audit 2026-03-09T17:25:10.345018+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:12.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:11 vm00 bash[28333]: audit 2026-03-09T17:25:10.341692+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:12.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:11 vm00 bash[28333]: audit 2026-03-09T17:25:10.341692+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:12.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:11 vm00 bash[20770]: audit 2026-03-09T17:25:10.341692+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:12.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:11 vm00 bash[20770]: audit 2026-03-09T17:25:10.341692+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 
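Annotation: once the mgr has dispatched the orchestrator call, the entries that follow show the new daemon being registered: client.bootstrap-osd issues "osd new" with the OSD's uuid, the monitor commits it, and the osdmap moves to e41 with 7 total / 7 in while the new OSD is still down (6 up) until it finishes booting. A small polling sketch for "wait until the new OSD is up", in the spirit of what the cephadm task does after adding a daemon; the field names (num_osds, num_up_osds) are the ones "ceph osd stat -f json" normally reports, so treat the exact shape as an assumption:

    import json, subprocess, time

    def osd_counts():
        out = subprocess.run(["ceph", "osd", "stat", "-f", "json"],
                             capture_output=True, check=True, text=True).stdout
        st = json.loads(out)
        return st["num_osds"], st["num_up_osds"]

    expected = 7   # osdmap e41 above reports "7 total"
    deadline = time.time() + 300
    while time.time() < deadline:
        total, up = osd_counts()
        if total >= expected and up >= expected:
            break       # the new OSD has booted and the map caught up
        time.sleep(5)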
2026-03-09T17:25:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:11 vm02 bash[23351]: audit 2026-03-09T17:25:10.341692+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:12.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:11 vm02 bash[23351]: audit 2026-03-09T17:25:10.341692+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:12 vm00 bash[28333]: cluster 2026-03-09T17:25:11.349624+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T17:25:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:12 vm00 bash[28333]: cluster 2026-03-09T17:25:11.349624+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T17:25:13.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:12 vm00 bash[20770]: cluster 2026-03-09T17:25:11.349624+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T17:25:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:12 vm00 bash[20770]: cluster 2026-03-09T17:25:11.349624+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T17:25:13.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:12 vm02 bash[23351]: cluster 2026-03-09T17:25:11.349624+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T17:25:13.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:12 vm02 bash[23351]: cluster 2026-03-09T17:25:11.349624+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T17:25:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:14 vm00 bash[28333]: cluster 2026-03-09T17:25:13.349892+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:15.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:14 vm00 bash[28333]: cluster 2026-03-09T17:25:13.349892+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:14 vm00 bash[20770]: cluster 2026-03-09T17:25:13.349892+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:14 vm00 bash[20770]: cluster 2026-03-09T17:25:13.349892+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 
120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:15.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:14 vm02 bash[23351]: cluster 2026-03-09T17:25:13.349892+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:15.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:14 vm02 bash[23351]: cluster 2026-03-09T17:25:13.349892+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.711737+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.102:0/2872916254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.711737+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.102:0/2872916254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.712324+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.712324+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.717323+0000 mon.a (mon.0) 561 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]': finished 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.717323+0000 mon.a (mon.0) 561 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]': finished 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: cluster 2026-03-09T17:25:15.723303+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: cluster 2026-03-09T17:25:15.723303+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T17:25:16.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.723496+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:15 vm00 bash[28333]: audit 2026-03-09T17:25:15.723496+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.711737+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.102:0/2872916254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.711737+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.102:0/2872916254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.712324+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.712324+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.717323+0000 mon.a (mon.0) 561 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]': finished 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.717323+0000 mon.a (mon.0) 561 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]': finished 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: cluster 2026-03-09T17:25:15.723303+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: cluster 2026-03-09T17:25:15.723303+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.723496+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:15 vm00 bash[20770]: audit 2026-03-09T17:25:15.723496+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.711737+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.102:0/2872916254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.711737+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.102:0/2872916254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.712324+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.712324+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]: dispatch 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.717323+0000 mon.a (mon.0) 561 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]': finished 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.717323+0000 mon.a (mon.0) 561 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6"}]': finished 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: cluster 2026-03-09T17:25:15.723303+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: cluster 2026-03-09T17:25:15.723303+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.723496+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:15 vm02 bash[23351]: audit 2026-03-09T17:25:15.723496+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:16 vm00 bash[28333]: cluster 2026-03-09T17:25:15.350291+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:16 vm00 bash[28333]: cluster 2026-03-09T17:25:15.350291+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:16 vm00 bash[28333]: audit 2026-03-09T17:25:16.583060+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/331885679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:16 vm00 bash[28333]: audit 2026-03-09T17:25:16.583060+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/331885679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:16 vm00 bash[20770]: cluster 2026-03-09T17:25:15.350291+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:16 vm00 bash[20770]: cluster 2026-03-09T17:25:15.350291+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:16 vm00 bash[20770]: audit 2026-03-09T17:25:16.583060+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/331885679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:16 vm00 bash[20770]: audit 2026-03-09T17:25:16.583060+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 
192.168.123.102:0/331885679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:16 vm02 bash[23351]: cluster 2026-03-09T17:25:15.350291+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:16 vm02 bash[23351]: cluster 2026-03-09T17:25:15.350291+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:16 vm02 bash[23351]: audit 2026-03-09T17:25:16.583060+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/331885679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:16 vm02 bash[23351]: audit 2026-03-09T17:25:16.583060+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/331885679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:18 vm00 bash[28333]: cluster 2026-03-09T17:25:17.350540+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:18 vm00 bash[28333]: cluster 2026-03-09T17:25:17.350540+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:18 vm00 bash[20770]: cluster 2026-03-09T17:25:17.350540+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:18 vm00 bash[20770]: cluster 2026-03-09T17:25:17.350540+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:18 vm02 bash[23351]: cluster 2026-03-09T17:25:17.350540+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:18 vm02 bash[23351]: cluster 2026-03-09T17:25:17.350540+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:20 vm00 bash[28333]: cluster 2026-03-09T17:25:19.350837+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:20 vm00 bash[28333]: cluster 2026-03-09T17:25:19.350837+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:20 vm00 bash[20770]: cluster 2026-03-09T17:25:19.350837+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 
active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:20 vm00 bash[20770]: cluster 2026-03-09T17:25:19.350837+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:20 vm02 bash[23351]: cluster 2026-03-09T17:25:19.350837+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:20 vm02 bash[23351]: cluster 2026-03-09T17:25:19.350837+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:22 vm00 bash[28333]: cluster 2026-03-09T17:25:21.351140+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:22 vm00 bash[28333]: cluster 2026-03-09T17:25:21.351140+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:23.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:22 vm00 bash[20770]: cluster 2026-03-09T17:25:21.351140+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:23.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:22 vm00 bash[20770]: cluster 2026-03-09T17:25:21.351140+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:23.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:22 vm02 bash[23351]: cluster 2026-03-09T17:25:21.351140+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:23.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:22 vm02 bash[23351]: cluster 2026-03-09T17:25:21.351140+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:24 vm00 bash[28333]: cluster 2026-03-09T17:25:23.351416+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:24 vm00 bash[28333]: cluster 2026-03-09T17:25:23.351416+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:24 vm00 bash[20770]: cluster 2026-03-09T17:25:23.351416+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:24 vm00 bash[20770]: cluster 2026-03-09T17:25:23.351416+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.050 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:24 
vm02 bash[23351]: cluster 2026-03-09T17:25:23.351416+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.050 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:24 vm02 bash[23351]: cluster 2026-03-09T17:25:23.351416+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:25.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:25.635 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:25.635 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:25.635 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:25.917 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:25.918 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
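Note: the repeated systemd warnings above come from the unit template cephadm installed for this cluster (/etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service), whose line 23 sets KillMode=none, presumably so the container runtime rather than systemd manages the daemon processes. On this Ubuntu 22.04 systemd the setting only produces a deprecation warning and does not indicate a failure. A minimal sketch of how one could inspect the effective setting, plus a purely illustrative drop-in override (NOT applied in this run; cephadm expects KillMode=none, so the override is shown only to explain what the warning is asking for):

    # show the effective KillMode of one cephadm-managed unit on vm02
    systemctl show -p KillMode 'ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.b.service'

    # hypothetical drop-in override, for illustration only
    sudo systemctl edit 'ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.b.service'
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload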
2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 bash[23351]: audit 2026-03-09T17:25:24.790729+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 bash[23351]: audit 2026-03-09T17:25:24.790729+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 bash[23351]: audit 2026-03-09T17:25:24.791236+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 bash[23351]: audit 2026-03-09T17:25:24.791236+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 bash[23351]: cephadm 2026-03-09T17:25:24.791650+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm02 2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 bash[23351]: cephadm 2026-03-09T17:25:24.791650+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm02 2026-03-09T17:25:25.918 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:25.918 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:25:25 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
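Note: the same cluster/audit messages show up under several journalctl@ceph.mon.* prefixes because every monitor's journal carries the cluster-wide log and teuthology tails each deployed daemon's unit separately; the back-to-back identical lines under a single prefix appear to be the same journal record captured twice by that follower. The tail command teuthology runs per daemon is visible further down in this log; a sketch of doing the same by hand, reusing the fsid and a daemon name from this run:

    # follow only new journal output for one cephadm-managed daemon
    sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.b.service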
2026-03-09T17:25:26.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:25 vm00 bash[28333]: audit 2026-03-09T17:25:24.790729+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T17:25:26.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:25 vm00 bash[28333]: audit 2026-03-09T17:25:24.790729+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T17:25:26.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:25 vm00 bash[28333]: audit 2026-03-09T17:25:24.791236+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:26.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:25 vm00 bash[28333]: audit 2026-03-09T17:25:24.791236+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:26.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:25 vm00 bash[28333]: cephadm 2026-03-09T17:25:24.791650+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm02 2026-03-09T17:25:26.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:25 vm00 bash[28333]: cephadm 2026-03-09T17:25:24.791650+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm02 2026-03-09T17:25:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:25 vm00 bash[20770]: audit 2026-03-09T17:25:24.790729+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T17:25:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:25 vm00 bash[20770]: audit 2026-03-09T17:25:24.790729+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T17:25:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:25 vm00 bash[20770]: audit 2026-03-09T17:25:24.791236+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:25 vm00 bash[20770]: audit 2026-03-09T17:25:24.791236+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:25 vm00 bash[20770]: cephadm 2026-03-09T17:25:24.791650+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm02 2026-03-09T17:25:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:25 vm00 bash[20770]: cephadm 2026-03-09T17:25:24.791650+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm02 2026-03-09T17:25:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: cluster 2026-03-09T17:25:25.351737+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: cluster 2026-03-09T17:25:25.351737+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 
pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: audit 2026-03-09T17:25:25.896352+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: audit 2026-03-09T17:25:25.896352+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:27.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: audit 2026-03-09T17:25:25.904577+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: audit 2026-03-09T17:25:25.904577+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: audit 2026-03-09T17:25:25.910632+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:26 vm00 bash[28333]: audit 2026-03-09T17:25:25.910632+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: cluster 2026-03-09T17:25:25.351737+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: cluster 2026-03-09T17:25:25.351737+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: audit 2026-03-09T17:25:25.896352+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: audit 2026-03-09T17:25:25.896352+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: audit 2026-03-09T17:25:25.904577+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: audit 2026-03-09T17:25:25.904577+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: audit 2026-03-09T17:25:25.910632+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:26 vm00 bash[20770]: audit 2026-03-09T17:25:25.910632+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
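Note: this audit trail is the tail end of the `ceph orch daemon add osd vm02:/dev/vdc` request dispatched earlier in this log: the bootstrap-osd key issues `osd new` with the new OSD's uuid, then mgr.y fetches `auth get osd.6` and `config generate-minimal-conf` before cephadm reports "Deploying daemon osd.6 on vm02". A minimal sketch of the same sequence driven by hand from an admin node (device path taken from this run; the numbered daemon id is assigned by the cluster, not chosen here):

    # ask the orchestrator to create an OSD on a specific device
    ceph orch daemon add osd vm02:/dev/vdc

    # steps the mgr performs next, as seen in the audit log
    ceph auth get osd.6                  # fetch the new daemon's keyring
    ceph config generate-minimal-conf    # minimal ceph.conf for the container

    # confirm the daemon was placed
    ceph orch ps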
2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: cluster 2026-03-09T17:25:25.351737+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: cluster 2026-03-09T17:25:25.351737+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: audit 2026-03-09T17:25:25.896352+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: audit 2026-03-09T17:25:25.896352+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: audit 2026-03-09T17:25:25.904577+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: audit 2026-03-09T17:25:25.904577+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: audit 2026-03-09T17:25:25.910632+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:27.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:26 vm02 bash[23351]: audit 2026-03-09T17:25:25.910632+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:28.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:28 vm02 bash[23351]: cluster 2026-03-09T17:25:27.351967+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:28.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:28 vm02 bash[23351]: cluster 2026-03-09T17:25:27.351967+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:29.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:28 vm00 bash[28333]: cluster 2026-03-09T17:25:27.351967+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:29.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:28 vm00 bash[28333]: cluster 2026-03-09T17:25:27.351967+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:29.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:28 vm00 bash[20770]: cluster 2026-03-09T17:25:27.351967+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:29.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:28 vm00 bash[20770]: cluster 2026-03-09T17:25:27.351967+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 
GiB / 120 GiB avail 2026-03-09T17:25:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:30 vm02 bash[23351]: cluster 2026-03-09T17:25:29.352237+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:30 vm02 bash[23351]: cluster 2026-03-09T17:25:29.352237+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:30 vm02 bash[23351]: audit 2026-03-09T17:25:30.285830+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:30 vm02 bash[23351]: audit 2026-03-09T17:25:30.285830+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:30 vm02 bash[23351]: audit 2026-03-09T17:25:30.286327+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:30 vm02 bash[23351]: audit 2026-03-09T17:25:30.286327+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:30 vm00 bash[28333]: cluster 2026-03-09T17:25:29.352237+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:30 vm00 bash[28333]: cluster 2026-03-09T17:25:29.352237+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:30 vm00 bash[28333]: audit 2026-03-09T17:25:30.285830+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:30 vm00 bash[28333]: audit 2026-03-09T17:25:30.285830+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:30 vm00 bash[28333]: audit 2026-03-09T17:25:30.286327+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:30 vm00 bash[28333]: audit 2026-03-09T17:25:30.286327+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:30 vm00 
bash[20770]: cluster 2026-03-09T17:25:29.352237+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:30 vm00 bash[20770]: cluster 2026-03-09T17:25:29.352237+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:30 vm00 bash[20770]: audit 2026-03-09T17:25:30.285830+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:30 vm00 bash[20770]: audit 2026-03-09T17:25:30.285830+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:30 vm00 bash[20770]: audit 2026-03-09T17:25:30.286327+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:30 vm00 bash[20770]: audit 2026-03-09T17:25:30.286327+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.793614+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.793614+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.797354+0000 mon.b (mon.1) 22 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.797354+0000 mon.b (mon.1) 22 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: cluster 2026-03-09T17:25:30.797536+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: cluster 2026-03-09T17:25:30.797536+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.798302+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": 
"osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.798302+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.798407+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:30.798407+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:31.796696+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: audit 2026-03-09T17:25:31.796696+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: cluster 2026-03-09T17:25:31.799729+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T17:25:32.124 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:31 vm02 bash[23351]: cluster 2026-03-09T17:25:31.799729+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.793614+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.793614+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.797354+0000 mon.b (mon.1) 22 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.797354+0000 mon.b (mon.1) 22 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: cluster 2026-03-09T17:25:30.797536+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 
bash[28333]: cluster 2026-03-09T17:25:30.797536+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.798302+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.798302+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.798407+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:30.798407+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:31.796696+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: audit 2026-03-09T17:25:31.796696+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: cluster 2026-03-09T17:25:31.799729+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:31 vm00 bash[28333]: cluster 2026-03-09T17:25:31.799729+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.793614+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.793614+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.797354+0000 mon.b (mon.1) 22 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.797354+0000 mon.b (mon.1) 22 : audit [INF] from='osd.6 v2:192.168.123.102:6808/2053868073' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, 
"weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: cluster 2026-03-09T17:25:30.797536+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: cluster 2026-03-09T17:25:30.797536+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.798302+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.798302+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.798407+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:30.798407+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:31.796696+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: audit 2026-03-09T17:25:31.796696+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: cluster 2026-03-09T17:25:31.799729+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T17:25:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:31 vm00 bash[20770]: cluster 2026-03-09T17:25:31.799729+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: cluster 2026-03-09T17:25:31.352511+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: cluster 2026-03-09T17:25:31.352511+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:31.800728+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:31.800728+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:31.803796+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:31.803796+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.129566+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.129566+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.133337+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.133337+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.604459+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.604459+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.605048+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.605048+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.610284+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.610284+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: cluster 2026-03-09T17:25:32.803198+0000 mon.a (mon.0) 583 : cluster [INF] osd.6 v2:192.168.123.102:6808/2053868073 boot 
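Note: once the osd.6 container starts it registers itself in the CRUSH map, which is what the surrounding audit lines record: `osd crush set-device-class` marks id 6 as hdd, `osd crush create-or-move` places it with weight 0.0195 under host=vm02 / root=default, and the cluster log then shows "osd.6 ... boot" with osdmap e44 reporting 7 up, 7 in. A short sketch of verifying the result by hand after such a run (stock ceph CLI commands; exact output will differ):

    ceph osd tree          # osd.6 should appear under host vm02 with class hdd
    ceph osd metadata 6    # same query mgr.y issues in the audit log above
    ceph -s                # osdmap should report 7 osds: 7 up, 7 in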
2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: cluster 2026-03-09T17:25:32.803198+0000 mon.a (mon.0) 583 : cluster [INF] osd.6 v2:192.168.123.102:6808/2053868073 boot 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: cluster 2026-03-09T17:25:32.804529+0000 mon.a (mon.0) 584 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: cluster 2026-03-09T17:25:32.804529+0000 mon.a (mon.0) 584 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.805149+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:32 vm02 bash[23351]: audit 2026-03-09T17:25:32.805149+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2d98ff9640 1 -- 192.168.123.102:0/1073792182 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f2d9c10b920 con 0x7f2d74077540 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 6 on host 'vm02' 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2d74077540 msgr2=0x7f2d74079a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 --2- 192.168.123.102:0/1073792182 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2d74077540 0x7f2d74079a00 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f2d88002c40 tx=0x7f2d88031040 comp rx=0 tx=0).stop 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c105f80 msgr2=0x7f2d9c19cb20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c105f80 0x7f2d9c19cb20 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f2d8c00d730 tx=0x7f2d8c00dc00 comp rx=0 tx=0).stop 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 shutdown_connections 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 --2- 192.168.123.102:0/1073792182 >> v2:192.168.123.100:6800/3114914985 conn(0x7f2d74077540 0x7f2d74079a00 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2d9c106940 0x7f2d9c1a3ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d9c105f80 0x7f2d9c19cb20 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 --2- 192.168.123.102:0/1073792182 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2d9c104d80 0x7f2d9c19c5e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 >> 192.168.123.102:0/1073792182 conn(0x7f2d9c100510 msgr2=0x7f2d9c101fd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 shutdown_connections 2026-03-09T17:25:33.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:33.196+0000 7f2da1b63640 1 -- 192.168.123.102:0/1073792182 wait complete. 2026-03-09T17:25:33.279 DEBUG:teuthology.orchestra.run.vm02:osd.6> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.6.service 2026-03-09T17:25:33.280 INFO:tasks.cephadm:Deploying osd.7 on vm02 with /dev/vdb... 2026-03-09T17:25:33.280 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- lvm zap /dev/vdb 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: cluster 2026-03-09T17:25:31.352511+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: cluster 2026-03-09T17:25:31.352511+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:31.800728+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:31.800728+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:31.803796+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:31.803796+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 
2026-03-09T17:25:32.129566+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.129566+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.133337+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.133337+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.604459+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.604459+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.605048+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.605048+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.610284+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.610284+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: cluster 2026-03-09T17:25:32.803198+0000 mon.a (mon.0) 583 : cluster [INF] osd.6 v2:192.168.123.102:6808/2053868073 boot 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: cluster 2026-03-09T17:25:32.803198+0000 mon.a (mon.0) 583 : cluster [INF] osd.6 v2:192.168.123.102:6808/2053868073 boot 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: cluster 2026-03-09T17:25:32.804529+0000 mon.a (mon.0) 584 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: cluster 2026-03-09T17:25:32.804529+0000 mon.a (mon.0) 584 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.805149+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:32 vm00 bash[28333]: audit 2026-03-09T17:25:32.805149+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: cluster 2026-03-09T17:25:31.352511+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: cluster 2026-03-09T17:25:31.352511+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:31.800728+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:31.800728+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:31.803796+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:31.803796+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.129566+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.129566+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.133337+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.133337+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.604459+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.604459+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.605048+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.605048+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.610284+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.610284+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: cluster 2026-03-09T17:25:32.803198+0000 mon.a (mon.0) 583 : cluster [INF] osd.6 v2:192.168.123.102:6808/2053868073 boot 2026-03-09T17:25:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: cluster 2026-03-09T17:25:32.803198+0000 mon.a (mon.0) 583 : cluster [INF] osd.6 v2:192.168.123.102:6808/2053868073 boot 2026-03-09T17:25:33.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: cluster 2026-03-09T17:25:32.804529+0000 mon.a (mon.0) 584 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T17:25:33.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: cluster 2026-03-09T17:25:32.804529+0000 mon.a (mon.0) 584 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T17:25:33.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.805149+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:33.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:32 vm00 bash[20770]: audit 2026-03-09T17:25:32.805149+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: cluster 2026-03-09T17:25:31.248011+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: cluster 2026-03-09T17:25:31.248011+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: cluster 2026-03-09T17:25:31.248061+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: cluster 2026-03-09T17:25:31.248061+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: audit 2026-03-09T17:25:33.186878+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: audit 2026-03-09T17:25:33.186878+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: audit 2026-03-09T17:25:33.193898+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: audit 2026-03-09T17:25:33.193898+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: audit 2026-03-09T17:25:33.199269+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: audit 2026-03-09T17:25:33.199269+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: cluster 2026-03-09T17:25:33.807758+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T17:25:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:33 vm02 bash[23351]: cluster 2026-03-09T17:25:33.807758+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T17:25:34.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: cluster 2026-03-09T17:25:31.248011+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: cluster 2026-03-09T17:25:31.248011+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: cluster 2026-03-09T17:25:31.248061+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: cluster 2026-03-09T17:25:31.248061+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: audit 2026-03-09T17:25:33.186878+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: audit 2026-03-09T17:25:33.186878+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: audit 2026-03-09T17:25:33.193898+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: audit 2026-03-09T17:25:33.193898+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: audit 2026-03-09T17:25:33.199269+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: audit 2026-03-09T17:25:33.199269+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: cluster 2026-03-09T17:25:33.807758+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:33 vm00 bash[28333]: cluster 2026-03-09T17:25:33.807758+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: cluster 2026-03-09T17:25:31.248011+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: cluster 2026-03-09T17:25:31.248011+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: cluster 2026-03-09T17:25:31.248061+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: cluster 2026-03-09T17:25:31.248061+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: audit 2026-03-09T17:25:33.186878+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: audit 2026-03-09T17:25:33.186878+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: audit 2026-03-09T17:25:33.193898+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: audit 2026-03-09T17:25:33.193898+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: audit 2026-03-09T17:25:33.199269+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: audit 2026-03-09T17:25:33.199269+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: cluster 2026-03-09T17:25:33.807758+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T17:25:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:33 vm00 bash[20770]: cluster 2026-03-09T17:25:33.807758+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T17:25:35.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:34 vm02 bash[23351]: cluster 2026-03-09T17:25:33.352846+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:35.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:34 vm02 bash[23351]: cluster 2026-03-09T17:25:33.352846+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v180: 1 
pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:34 vm00 bash[28333]: cluster 2026-03-09T17:25:33.352846+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:34 vm00 bash[28333]: cluster 2026-03-09T17:25:33.352846+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:34 vm00 bash[20770]: cluster 2026-03-09T17:25:33.352846+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:34 vm00 bash[20770]: cluster 2026-03-09T17:25:33.352846+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:36 vm00 bash[28333]: cluster 2026-03-09T17:25:35.210775+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:36 vm00 bash[28333]: cluster 2026-03-09T17:25:35.210775+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:36 vm00 bash[28333]: cluster 2026-03-09T17:25:35.353188+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:36 vm00 bash[28333]: cluster 2026-03-09T17:25:35.353188+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:36 vm00 bash[20770]: cluster 2026-03-09T17:25:35.210775+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:36 vm00 bash[20770]: cluster 2026-03-09T17:25:35.210775+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:36 vm00 bash[20770]: cluster 2026-03-09T17:25:35.353188+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:36 vm00 bash[20770]: cluster 2026-03-09T17:25:35.353188+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:36 vm02 bash[23351]: cluster 2026-03-09T17:25:35.210775+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T17:25:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:36 vm02 bash[23351]: cluster 2026-03-09T17:25:35.210775+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T17:25:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:36 vm02 bash[23351]: cluster 2026-03-09T17:25:35.353188+0000 mgr.y 
(mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:36 vm02 bash[23351]: cluster 2026-03-09T17:25:35.353188+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:37.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:37 vm00 bash[28333]: cluster 2026-03-09T17:25:36.203675+0000 mon.a (mon.0) 591 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:25:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:37 vm00 bash[28333]: cluster 2026-03-09T17:25:36.203675+0000 mon.a (mon.0) 591 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:25:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:37 vm00 bash[20770]: cluster 2026-03-09T17:25:36.203675+0000 mon.a (mon.0) 591 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:25:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:37 vm00 bash[20770]: cluster 2026-03-09T17:25:36.203675+0000 mon.a (mon.0) 591 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:25:37.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:37 vm02 bash[23351]: cluster 2026-03-09T17:25:36.203675+0000 mon.a (mon.0) 591 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:25:37.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:37 vm02 bash[23351]: cluster 2026-03-09T17:25:36.203675+0000 mon.a (mon.0) 591 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:25:37.948 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:25:38.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:38 vm00 bash[28333]: cluster 2026-03-09T17:25:37.353497+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:38 vm00 bash[28333]: cluster 2026-03-09T17:25:37.353497+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:38 vm00 bash[20770]: cluster 2026-03-09T17:25:37.353497+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:38 vm00 bash[20770]: cluster 2026-03-09T17:25:37.353497+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:38 vm02 bash[23351]: cluster 2026-03-09T17:25:37.353497+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:38 vm02 bash[23351]: cluster 2026-03-09T17:25:37.353497+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 
pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:39.492 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-09T17:25:39.507 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch daemon add osd vm02:/dev/vdb 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: cephadm 2026-03-09T17:25:38.779521+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: cephadm 2026-03-09T17:25:38.779521+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.787306+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.787306+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.793609+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.793609+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.794432+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.794432+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.795064+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.795064+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.795804+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.795804+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: cephadm 2026-03-09T17:25:38.796217+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Adjusting osd_memory_target on vm02 to 151.9M 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: cephadm 2026-03-09T17:25:38.796217+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Adjusting osd_memory_target on vm02 to 151.9M 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: cephadm 2026-03-09T17:25:38.796759+0000 mgr.y (mgr.14150) 207 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: cephadm 2026-03-09T17:25:38.796759+0000 mgr.y (mgr.14150) 207 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.797119+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.797119+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.797602+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.797602+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.802251+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:39 vm00 bash[28333]: audit 2026-03-09T17:25:38.802251+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: cephadm 2026-03-09T17:25:38.779521+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: cephadm 2026-03-09T17:25:38.779521+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.787306+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.787306+0000 
mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.793609+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.793609+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.794432+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.794432+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.795064+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.795064+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.795804+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.795804+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: cephadm 2026-03-09T17:25:38.796217+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Adjusting osd_memory_target on vm02 to 151.9M 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: cephadm 2026-03-09T17:25:38.796217+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Adjusting osd_memory_target on vm02 to 151.9M 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: cephadm 2026-03-09T17:25:38.796759+0000 mgr.y (mgr.14150) 207 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: cephadm 2026-03-09T17:25:38.796759+0000 mgr.y (mgr.14150) 207 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 
2026-03-09T17:25:38.797119+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.797119+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.797602+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.797602+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.802251+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:39 vm00 bash[20770]: audit 2026-03-09T17:25:38.802251+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: cephadm 2026-03-09T17:25:38.779521+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: cephadm 2026-03-09T17:25:38.779521+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.787306+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.787306+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.793609+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.793609+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.794432+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.794432+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: 
audit 2026-03-09T17:25:38.795064+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.795064+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.795804+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.795804+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: cephadm 2026-03-09T17:25:38.796217+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Adjusting osd_memory_target on vm02 to 151.9M 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: cephadm 2026-03-09T17:25:38.796217+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Adjusting osd_memory_target on vm02 to 151.9M 2026-03-09T17:25:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: cephadm 2026-03-09T17:25:38.796759+0000 mgr.y (mgr.14150) 207 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: cephadm 2026-03-09T17:25:38.796759+0000 mgr.y (mgr.14150) 207 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.797119+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.797119+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.797602+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.797602+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.802251+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' 
entity='mgr.y' 2026-03-09T17:25:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:39 vm02 bash[23351]: audit 2026-03-09T17:25:38.802251+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:25:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:40 vm02 bash[23351]: cluster 2026-03-09T17:25:39.353823+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:40 vm02 bash[23351]: cluster 2026-03-09T17:25:39.353823+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:40 vm02 bash[23351]: cluster 2026-03-09T17:25:39.800435+0000 mon.a (mon.0) 600 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:25:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:40 vm02 bash[23351]: cluster 2026-03-09T17:25:39.800435+0000 mon.a (mon.0) 600 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:25:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:40 vm02 bash[23351]: cluster 2026-03-09T17:25:39.800446+0000 mon.a (mon.0) 601 : cluster [INF] Cluster is now healthy 2026-03-09T17:25:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:40 vm02 bash[23351]: cluster 2026-03-09T17:25:39.800446+0000 mon.a (mon.0) 601 : cluster [INF] Cluster is now healthy 2026-03-09T17:25:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:40 vm00 bash[28333]: cluster 2026-03-09T17:25:39.353823+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:40 vm00 bash[28333]: cluster 2026-03-09T17:25:39.353823+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:40 vm00 bash[28333]: cluster 2026-03-09T17:25:39.800435+0000 mon.a (mon.0) 600 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:40 vm00 bash[28333]: cluster 2026-03-09T17:25:39.800435+0000 mon.a (mon.0) 600 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:40 vm00 bash[28333]: cluster 2026-03-09T17:25:39.800446+0000 mon.a (mon.0) 601 : cluster [INF] Cluster is now healthy 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:40 vm00 bash[28333]: cluster 2026-03-09T17:25:39.800446+0000 mon.a (mon.0) 601 : cluster [INF] Cluster is now healthy 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:40 vm00 bash[20770]: cluster 2026-03-09T17:25:39.353823+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:40 vm00 bash[20770]: cluster 2026-03-09T17:25:39.353823+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 
active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:40 vm00 bash[20770]: cluster 2026-03-09T17:25:39.800435+0000 mon.a (mon.0) 600 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:40 vm00 bash[20770]: cluster 2026-03-09T17:25:39.800435+0000 mon.a (mon.0) 600 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:40 vm00 bash[20770]: cluster 2026-03-09T17:25:39.800446+0000 mon.a (mon.0) 601 : cluster [INF] Cluster is now healthy 2026-03-09T17:25:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:40 vm00 bash[20770]: cluster 2026-03-09T17:25:39.800446+0000 mon.a (mon.0) 601 : cluster [INF] Cluster is now healthy 2026-03-09T17:25:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:42 vm02 bash[23351]: cluster 2026-03-09T17:25:41.354076+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:42 vm02 bash[23351]: cluster 2026-03-09T17:25:41.354076+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:42 vm00 bash[28333]: cluster 2026-03-09T17:25:41.354076+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:42 vm00 bash[28333]: cluster 2026-03-09T17:25:41.354076+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:42 vm00 bash[20770]: cluster 2026-03-09T17:25:41.354076+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:42 vm00 bash[20770]: cluster 2026-03-09T17:25:41.354076+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:44.127 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:25:44.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2777153662 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc0770a0 msgr2=0x7f33fc075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:44.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- 192.168.123.102:0/2777153662 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc0770a0 0x7f33fc075500 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f33e8009a30 tx=0x7f33e802f240 comp rx=0 tx=0).stop 2026-03-09T17:25:44.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2777153662 shutdown_connections 2026-03-09T17:25:44.277 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- 192.168.123.102:0/2777153662 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33fc1064c0 0x7f33fc1113d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- 192.168.123.102:0/2777153662 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f33fc075a40 0x7f33fc075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- 192.168.123.102:0/2777153662 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc0770a0 0x7f33fc075500 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2777153662 >> 192.168.123.102:0/2777153662 conn(0x7f33fc0fe290 msgr2=0x7f33fc1006b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2777153662 shutdown_connections 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2777153662 wait complete. 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 Processor -- start 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- start start 2026-03-09T17:25:44.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 0x7f33fc1a0c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33fc0770a0 0x7f33fc1a1170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f33fc1064c0 0x7f33fc1a81b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 0x7f33fc1a0c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 0x7f33fc1a0c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:46776/0 (socket says 192.168.123.102:46776) 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 -- 192.168.123.102:0/2730287410 learned_addr learned my addr 
192.168.123.102:0/2730287410 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f33fc114450 con 0x7f33fc0770a0 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f33fc1142d0 con 0x7f33fc075a40 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f33fc1145d0 con 0x7f33fc1064c0 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f3400ab2640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f33fc1064c0 0x7f33fc1a81b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fb7fe640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33fc0770a0 0x7f33fc1a1170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 -- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f33fc1064c0 msgr2=0x7f33fc1a81b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f33fc1064c0 0x7f33fc1a81b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 -- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33fc0770a0 msgr2=0x7f33fc1a1170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33fc0770a0 0x7f33fc1a1170 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:25:44.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 -- 192.168.123.102:0/2730287410 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f33fc1a88e0 con 0x7f33fc075a40 2026-03-09T17:25:44.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.272+0000 7f33fbfff640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 0x7f33fc1a0c30 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f33e8009a00 tx=0x7f33e8002ea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:25:44.280 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f33e8004280 con 0x7f33fc075a40 2026-03-09T17:25:44.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f33e8004420 con 0x7f33fc075a40 2026-03-09T17:25:44.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f33e8004db0 con 0x7f33fc075a40 2026-03-09T17:25:44.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f33fc1a8b70 con 0x7f33fc075a40 2026-03-09T17:25:44.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f33fc1a9000 con 0x7f33fc075a40 2026-03-09T17:25:44.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f33e8005030 con 0x7f33fc075a40 2026-03-09T17:25:44.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f33fc075f20 con 0x7f33fc075a40 2026-03-09T17:25:44.285 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33f97fa640 1 --2- 192.168.123.102:0/2730287410 >> v2:192.168.123.100:6800/3114914985 conn(0x7f33dc0775d0 0x7f33dc079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:25:44.285 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(46..46 src has 1..46) ==== 3769+0+0 (secure 0 0 0) 0x7f33e80bcc40 con 0x7f33fc075a40 2026-03-09T17:25:44.285 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33fb7fe640 1 --2- 192.168.123.102:0/2730287410 >> v2:192.168.123.100:6800/3114914985 conn(0x7f33dc0775d0 0x7f33dc079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:25:44.285 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.276+0000 7f33fb7fe640 1 --2- 192.168.123.102:0/2730287410 >> v2:192.168.123.100:6800/3114914985 conn(0x7f33dc0775d0 0x7f33dc079a90 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f33ec006fd0 tx=0x7f33ec008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:25:44.286 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.280+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f33e808a390 con 0x7f33fc075a40 2026-03-09T17:25:44.392 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:25:44.388+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f33fc0630c0 con 0x7f33dc0775d0 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: cluster 2026-03-09T17:25:43.354369+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: cluster 2026-03-09T17:25:43.354369+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: audit 2026-03-09T17:25:44.395817+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: audit 2026-03-09T17:25:44.395817+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: audit 2026-03-09T17:25:44.397578+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: audit 2026-03-09T17:25:44.397578+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: audit 2026-03-09T17:25:44.398045+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:45.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:44 vm02 bash[23351]: audit 2026-03-09T17:25:44.398045+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:45.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: cluster 2026-03-09T17:25:43.354369+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T17:25:45.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: cluster 2026-03-09T17:25:43.354369+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T17:25:45.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: audit 2026-03-09T17:25:44.395817+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], 
"format": "json"}]: dispatch 2026-03-09T17:25:45.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: audit 2026-03-09T17:25:44.395817+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:45.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: audit 2026-03-09T17:25:44.397578+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: audit 2026-03-09T17:25:44.397578+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: audit 2026-03-09T17:25:44.398045+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:44 vm00 bash[28333]: audit 2026-03-09T17:25:44.398045+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: cluster 2026-03-09T17:25:43.354369+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: cluster 2026-03-09T17:25:43.354369+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: audit 2026-03-09T17:25:44.395817+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: audit 2026-03-09T17:25:44.395817+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: audit 2026-03-09T17:25:44.397578+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: audit 2026-03-09T17:25:44.397578+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: audit 2026-03-09T17:25:44.398045+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:44 vm00 bash[20770]: audit 2026-03-09T17:25:44.398045+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:45 vm02 bash[23351]: audit 2026-03-09T17:25:44.394391+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:45 vm02 bash[23351]: audit 2026-03-09T17:25:44.394391+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:45 vm00 bash[28333]: audit 2026-03-09T17:25:44.394391+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:45 vm00 bash[28333]: audit 2026-03-09T17:25:44.394391+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:45 vm00 bash[20770]: audit 2026-03-09T17:25:44.394391+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:45 vm00 bash[20770]: audit 2026-03-09T17:25:44.394391+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:25:47.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:46 vm02 bash[23351]: cluster 2026-03-09T17:25:45.354626+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T17:25:47.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:46 vm02 bash[23351]: cluster 2026-03-09T17:25:45.354626+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T17:25:47.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:46 vm00 bash[28333]: cluster 2026-03-09T17:25:45.354626+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T17:25:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:46 vm00 bash[28333]: cluster 2026-03-09T17:25:45.354626+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T17:25:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:46 vm00 bash[20770]: cluster 
2026-03-09T17:25:45.354626+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T17:25:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:46 vm00 bash[20770]: cluster 2026-03-09T17:25:45.354626+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T17:25:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:48 vm02 bash[23351]: cluster 2026-03-09T17:25:47.354851+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:49.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:48 vm02 bash[23351]: cluster 2026-03-09T17:25:47.354851+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:49.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:48 vm00 bash[28333]: cluster 2026-03-09T17:25:47.354851+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:49.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:48 vm00 bash[28333]: cluster 2026-03-09T17:25:47.354851+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:48 vm00 bash[20770]: cluster 2026-03-09T17:25:47.354851+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:48 vm00 bash[20770]: cluster 2026-03-09T17:25:47.354851+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:50.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.789232+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/1774234426' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.789232+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/1774234426' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.789675+0000 mon.a (mon.0) 605 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.789675+0000 mon.a (mon.0) 605 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.792728+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]': finished 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.792728+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]': finished 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: cluster 2026-03-09T17:25:49.797734+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: cluster 2026-03-09T17:25:49.797734+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.797810+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:25:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:49 vm02 bash[23351]: audit 2026-03-09T17:25:49.797810+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.789232+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/1774234426' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.789232+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/1774234426' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.789675+0000 mon.a (mon.0) 605 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.789675+0000 mon.a (mon.0) 605 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.792728+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]': finished 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.792728+0000 mon.a (mon.0) 606 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]': finished 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: cluster 2026-03-09T17:25:49.797734+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: cluster 2026-03-09T17:25:49.797734+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T17:25:50.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.797810+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:49 vm00 bash[28333]: audit 2026-03-09T17:25:49.797810+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.789232+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/1774234426' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.789232+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/1774234426' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.789675+0000 mon.a (mon.0) 605 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.789675+0000 mon.a (mon.0) 605 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.792728+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]': finished 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.792728+0000 mon.a (mon.0) 606 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e273c6f-f4ee-411b-abb9-572d1556cdc9"}]': finished 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: cluster 2026-03-09T17:25:49.797734+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: cluster 2026-03-09T17:25:49.797734+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.797810+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:25:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:49 vm00 bash[20770]: audit 2026-03-09T17:25:49.797810+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:25:51.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:50 vm02 bash[23351]: cluster 2026-03-09T17:25:49.355169+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:51.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:50 vm02 bash[23351]: cluster 2026-03-09T17:25:49.355169+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:51.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:50 vm02 bash[23351]: audit 2026-03-09T17:25:50.458082+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/157172401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:51.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:50 vm02 bash[23351]: audit 2026-03-09T17:25:50.458082+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/157172401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:51.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:50 vm00 bash[28333]: cluster 2026-03-09T17:25:49.355169+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:51.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:50 vm00 bash[28333]: cluster 2026-03-09T17:25:49.355169+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:51.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:50 vm00 bash[28333]: audit 2026-03-09T17:25:50.458082+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/157172401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:51.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:50 vm00 bash[28333]: audit 2026-03-09T17:25:50.458082+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.102:0/157172401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:51.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:50 vm00 bash[20770]: cluster 2026-03-09T17:25:49.355169+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:51.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:50 vm00 bash[20770]: cluster 2026-03-09T17:25:49.355169+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T17:25:51.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:50 vm00 bash[20770]: audit 2026-03-09T17:25:50.458082+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/157172401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:51.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:50 vm00 bash[20770]: audit 2026-03-09T17:25:50.458082+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/157172401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T17:25:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:52 vm02 bash[23351]: cluster 2026-03-09T17:25:51.355811+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:52 vm02 bash[23351]: cluster 2026-03-09T17:25:51.355811+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:52 vm00 bash[28333]: cluster 2026-03-09T17:25:51.355811+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:52 vm00 bash[28333]: cluster 2026-03-09T17:25:51.355811+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:52 vm00 bash[20770]: cluster 2026-03-09T17:25:51.355811+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:52 vm00 bash[20770]: cluster 2026-03-09T17:25:51.355811+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:55.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:54 vm00 bash[28333]: cluster 2026-03-09T17:25:53.356181+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:54 vm00 bash[28333]: cluster 2026-03-09T17:25:53.356181+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:54 vm00 bash[20770]: cluster 2026-03-09T17:25:53.356181+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 
active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:54 vm00 bash[20770]: cluster 2026-03-09T17:25:53.356181+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:55.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:54 vm02 bash[23351]: cluster 2026-03-09T17:25:53.356181+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:55.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:54 vm02 bash[23351]: cluster 2026-03-09T17:25:53.356181+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:56 vm00 bash[28333]: cluster 2026-03-09T17:25:55.356460+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:56 vm00 bash[28333]: cluster 2026-03-09T17:25:55.356460+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:56 vm00 bash[20770]: cluster 2026-03-09T17:25:55.356460+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:56 vm00 bash[20770]: cluster 2026-03-09T17:25:55.356460+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:57.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:56 vm02 bash[23351]: cluster 2026-03-09T17:25:55.356460+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:57.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:56 vm02 bash[23351]: cluster 2026-03-09T17:25:55.356460+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.175 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:58 vm02 bash[23351]: cluster 2026-03-09T17:25:57.356694+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.175 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:58 vm02 bash[23351]: cluster 2026-03-09T17:25:57.356694+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.175 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:58 vm02 bash[23351]: audit 2026-03-09T17:25:58.703888+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T17:25:59.175 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:58 vm02 bash[23351]: audit 2026-03-09T17:25:58.703888+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T17:25:59.176 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:58 vm02 bash[23351]: audit 2026-03-09T17:25:58.704443+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:59.176 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:58 vm02 bash[23351]: audit 2026-03-09T17:25:58.704443+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:58 vm00 bash[28333]: cluster 2026-03-09T17:25:57.356694+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:58 vm00 bash[28333]: cluster 2026-03-09T17:25:57.356694+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:58 vm00 bash[28333]: audit 2026-03-09T17:25:58.703888+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T17:25:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:58 vm00 bash[28333]: audit 2026-03-09T17:25:58.703888+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T17:25:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:58 vm00 bash[28333]: audit 2026-03-09T17:25:58.704443+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:59.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:58 vm00 bash[28333]: audit 2026-03-09T17:25:58.704443+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:58 vm00 bash[20770]: cluster 2026-03-09T17:25:57.356694+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:58 vm00 bash[20770]: cluster 2026-03-09T17:25:57.356694+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:25:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:58 vm00 bash[20770]: audit 2026-03-09T17:25:58.703888+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T17:25:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:58 vm00 bash[20770]: audit 2026-03-09T17:25:58.703888+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T17:25:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:58 vm00 bash[20770]: audit 2026-03-09T17:25:58.704443+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T17:25:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:58 vm00 bash[20770]: audit 2026-03-09T17:25:58.704443+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:25:59.811 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.811 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.811 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.811 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.812 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.812 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.812 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:25:59.812 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.812 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:25:59.812 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:25:59 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: cephadm 2026-03-09T17:25:58.704858+0000 mgr.y (mgr.14150) 219 : cephadm [INF] Deploying daemon osd.7 on vm02 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: cephadm 2026-03-09T17:25:58.704858+0000 mgr.y (mgr.14150) 219 : cephadm [INF] Deploying daemon osd.7 on vm02 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: audit 2026-03-09T17:25:59.835894+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: audit 2026-03-09T17:25:59.835894+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: audit 2026-03-09T17:25:59.844135+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: audit 2026-03-09T17:25:59.844135+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: audit 2026-03-09T17:25:59.849280+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.114 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:25:59 vm02 bash[23351]: audit 2026-03-09T17:25:59.849280+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: cephadm 2026-03-09T17:25:58.704858+0000 mgr.y (mgr.14150) 219 : cephadm [INF] Deploying daemon osd.7 on vm02 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 
bash[28333]: cephadm 2026-03-09T17:25:58.704858+0000 mgr.y (mgr.14150) 219 : cephadm [INF] Deploying daemon osd.7 on vm02 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: audit 2026-03-09T17:25:59.835894+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: audit 2026-03-09T17:25:59.835894+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: audit 2026-03-09T17:25:59.844135+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: audit 2026-03-09T17:25:59.844135+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: audit 2026-03-09T17:25:59.849280+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:25:59 vm00 bash[28333]: audit 2026-03-09T17:25:59.849280+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: cephadm 2026-03-09T17:25:58.704858+0000 mgr.y (mgr.14150) 219 : cephadm [INF] Deploying daemon osd.7 on vm02 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: cephadm 2026-03-09T17:25:58.704858+0000 mgr.y (mgr.14150) 219 : cephadm [INF] Deploying daemon osd.7 on vm02 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: audit 2026-03-09T17:25:59.835894+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: audit 2026-03-09T17:25:59.835894+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: audit 2026-03-09T17:25:59.844135+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: audit 2026-03-09T17:25:59.844135+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: audit 2026-03-09T17:25:59.849280+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:25:59 vm00 bash[20770]: audit 2026-03-09T17:25:59.849280+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 17:26:00 vm00 bash[28333]: cluster 2026-03-09T17:25:59.356993+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:00 vm00 bash[28333]: cluster 2026-03-09T17:25:59.356993+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:00 vm00 bash[20770]: cluster 2026-03-09T17:25:59.356993+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:00 vm00 bash[20770]: cluster 2026-03-09T17:25:59.356993+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:01.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:00 vm02 bash[23351]: cluster 2026-03-09T17:25:59.356993+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:01.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:00 vm02 bash[23351]: cluster 2026-03-09T17:25:59.356993+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:03.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:03 vm00 bash[28333]: cluster 2026-03-09T17:26:01.357240+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:03.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:03 vm00 bash[28333]: cluster 2026-03-09T17:26:01.357240+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:03.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:03 vm00 bash[20770]: cluster 2026-03-09T17:26:01.357240+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:03 vm00 bash[20770]: cluster 2026-03-09T17:26:01.357240+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:03 vm02 bash[23351]: cluster 2026-03-09T17:26:01.357240+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:03 vm02 bash[23351]: cluster 2026-03-09T17:26:01.357240+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:04 vm00 bash[28333]: audit 2026-03-09T17:26:03.220467+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:04 vm00 bash[28333]: audit 2026-03-09T17:26:03.220467+0000 mon.b (mon.1) 24 : audit 
[INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:04 vm00 bash[28333]: audit 2026-03-09T17:26:03.220922+0000 mon.a (mon.0) 614 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:04 vm00 bash[28333]: audit 2026-03-09T17:26:03.220922+0000 mon.a (mon.0) 614 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:04 vm00 bash[28333]: cluster 2026-03-09T17:26:03.357490+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:04 vm00 bash[28333]: cluster 2026-03-09T17:26:03.357490+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:04 vm00 bash[20770]: audit 2026-03-09T17:26:03.220467+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:04 vm00 bash[20770]: audit 2026-03-09T17:26:03.220467+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:04 vm00 bash[20770]: audit 2026-03-09T17:26:03.220922+0000 mon.a (mon.0) 614 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:04 vm00 bash[20770]: audit 2026-03-09T17:26:03.220922+0000 mon.a (mon.0) 614 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:04 vm00 bash[20770]: cluster 2026-03-09T17:26:03.357490+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:04 vm00 bash[20770]: cluster 2026-03-09T17:26:03.357490+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:04 vm02 bash[23351]: audit 2026-03-09T17:26:03.220467+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:04 vm02 bash[23351]: audit 2026-03-09T17:26:03.220467+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:04 vm02 bash[23351]: audit 2026-03-09T17:26:03.220922+0000 mon.a (mon.0) 614 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:04 vm02 bash[23351]: audit 2026-03-09T17:26:03.220922+0000 mon.a (mon.0) 614 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T17:26:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:04 vm02 bash[23351]: cluster 2026-03-09T17:26:03.357490+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:04 vm02 bash[23351]: cluster 2026-03-09T17:26:03.357490+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.205850+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.205850+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: cluster 2026-03-09T17:26:04.270613+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: cluster 2026-03-09T17:26:04.270613+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.277200+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.277200+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.290019+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.290019+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.290284+0000 mon.a (mon.0) 618 : audit [INF] 
from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:04.290284+0000 mon.a (mon.0) 618 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:05.209142+0000 mon.a (mon.0) 619 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:05.209142+0000 mon.a (mon.0) 619 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: cluster 2026-03-09T17:26:05.216045+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: cluster 2026-03-09T17:26:05.216045+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:05.216393+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:05 vm02 bash[23351]: audit 2026-03-09T17:26:05.216393+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.205850+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.205850+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: cluster 2026-03-09T17:26:04.270613+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: cluster 2026-03-09T17:26:04.270613+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.277200+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.277200+0000 mon.b (mon.1) 25 : audit [INF] 
from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.290019+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.290019+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.290284+0000 mon.a (mon.0) 618 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:04.290284+0000 mon.a (mon.0) 618 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:05.209142+0000 mon.a (mon.0) 619 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:05.209142+0000 mon.a (mon.0) 619 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: cluster 2026-03-09T17:26:05.216045+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: cluster 2026-03-09T17:26:05.216045+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:05.216393+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:05 vm00 bash[20770]: audit 2026-03-09T17:26:05.216393+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.205850+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.205850+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 
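The `osd crush create-or-move` commands above register osd.7 with weight 0.0195. CRUSH device weights are conventionally the capacity expressed in TiB, so 0.0195 × 1024 ≈ 20 GiB per OSD; that is consistent with the pgmap totals in the surrounding records, which grow from 140 GiB available with 7 OSDs up to 160 GiB once osd.7 boots.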
2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: cluster 2026-03-09T17:26:04.270613+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: cluster 2026-03-09T17:26:04.270613+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.277200+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.277200+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.102:6812/3460410068' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.290019+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.290019+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.290284+0000 mon.a (mon.0) 618 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:04.290284+0000 mon.a (mon.0) 618 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:05.209142+0000 mon.a (mon.0) 619 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:05.209142+0000 mon.a (mon.0) 619 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: cluster 2026-03-09T17:26:05.216045+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: cluster 2026-03-09T17:26:05.216045+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:05.216393+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:05 vm00 bash[28333]: audit 2026-03-09T17:26:05.216393+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:04.171805+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:04.171805+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:04.171873+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:04.171873+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:05.357756+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:05.357756+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.006421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.006421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.012086+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.012086+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.012833+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.012833+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.013265+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.013265+0000 mon.a (mon.0) 
625 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.017771+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.017771+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:06.201197+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.102:6812/3460410068 boot 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:06.201197+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.102:6812/3460410068 boot 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:06.201353+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: cluster 2026-03-09T17:26:06.201353+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.201548+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:06 vm02 bash[23351]: audit 2026-03-09T17:26:06.201548+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:07.007 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 7 on host 'vm02' 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.000+0000 7f33f97fa640 1 -- 192.168.123.102:0/2730287410 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f33fc0630c0 con 0x7f33dc0775d0 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.000+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 >> v2:192.168.123.100:6800/3114914985 conn(0x7f33dc0775d0 msgr2=0x7f33dc079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.000+0000 7f340253c640 1 --2- 192.168.123.102:0/2730287410 >> v2:192.168.123.100:6800/3114914985 conn(0x7f33dc0775d0 0x7f33dc079a90 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f33ec006fd0 tx=0x7f33ec008040 comp rx=0 tx=0).stop 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.000+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 msgr2=0x7f33fc1a0c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.000+0000 7f340253c640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 0x7f33fc1a0c30 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 
crypto rx=0x7f33e8009a00 tx=0x7f33e8002ea0 comp rx=0 tx=0).stop 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 shutdown_connections 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 --2- 192.168.123.102:0/2730287410 >> v2:192.168.123.100:6800/3114914985 conn(0x7f33dc0775d0 0x7f33dc079a90 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f33fc1064c0 0x7f33fc1a81b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33fc0770a0 0x7f33fc1a1170 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 --2- 192.168.123.102:0/2730287410 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33fc075a40 0x7f33fc1a0c30 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 >> 192.168.123.102:0/2730287410 conn(0x7f33fc0fe290 msgr2=0x7f33fc102320 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 shutdown_connections 2026-03-09T17:26:07.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:07.004+0000 7f340253c640 1 -- 192.168.123.102:0/2730287410 wait complete. 
2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:04.171805+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:04.171805+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:04.171873+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:04.171873+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:05.357756+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:05.357756+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.006421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.006421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.012086+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.012086+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.012833+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.012833+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.013265+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.013265+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.017771+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.017771+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:06.201197+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.102:6812/3460410068 boot 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:06.201197+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.102:6812/3460410068 boot 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:06.201353+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: cluster 2026-03-09T17:26:06.201353+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.201548+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:06 vm00 bash[28333]: audit 2026-03-09T17:26:06.201548+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:04.171805+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:04.171805+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:04.171873+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:04.171873+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:05.357756+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:05.357756+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.006421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.006421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.012086+0000 mon.a 
(mon.0) 623 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.012086+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.012833+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.012833+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.013265+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.013265+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.017771+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.017771+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:06.201197+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.102:6812/3460410068 boot 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:06.201197+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.102:6812/3460410068 boot 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:06.201353+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: cluster 2026-03-09T17:26:06.201353+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.201548+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:06 vm00 bash[20770]: audit 2026-03-09T17:26:06.201548+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:07.072 DEBUG:teuthology.orchestra.run.vm02:osd.7> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.7.service 2026-03-09T17:26:07.072 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 
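The "Waiting for 8 OSDs to come up" step polls the cluster by running `ceph osd stat -f json` through `cephadm shell` (the next DEBUG line) and reading the reported OSD counts; the JSON printed a few records further down shows num_osds, num_up_osds and num_in_osds all at 8, so the wait completes on the first check. A minimal sketch of that kind of polling loop, written here in Python against a plain `ceph` CLI rather than the teuthology/cephadm wrappers (the function name, timeout and interval are illustrative, not taken from the task itself):

import json
import subprocess
import time

def wait_for_up_osds(want, timeout=600, interval=5):
    """Poll `ceph osd stat -f json` until num_up_osds reaches `want`."""
    deadline = time.time() + timeout
    while True:
        out = subprocess.check_output(["ceph", "osd", "stat", "-f", "json"])
        stat = json.loads(out)
        # Same shape as the output logged below, e.g.
        # {"epoch":52,"num_osds":8,"num_up_osds":8,"num_in_osds":8,...}
        if stat.get("num_up_osds", 0) >= want:
            return stat
        if time.time() >= deadline:
            raise TimeoutError(f"only {stat.get('num_up_osds', 0)} of {want} OSDs up")
        time.sleep(interval)

wait_for_up_osds(8)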
2026-03-09T17:26:07.073 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd stat -f json 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: audit 2026-03-09T17:26:06.992444+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: audit 2026-03-09T17:26:06.992444+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: audit 2026-03-09T17:26:06.998153+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: audit 2026-03-09T17:26:06.998153+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: audit 2026-03-09T17:26:07.003058+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: audit 2026-03-09T17:26:07.003058+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: cluster 2026-03-09T17:26:07.390990+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T17:26:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:07 vm02 bash[23351]: cluster 2026-03-09T17:26:07.390990+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T17:26:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: audit 2026-03-09T17:26:06.992444+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: audit 2026-03-09T17:26:06.992444+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: audit 2026-03-09T17:26:06.998153+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: audit 2026-03-09T17:26:06.998153+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: audit 2026-03-09T17:26:07.003058+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: audit 
2026-03-09T17:26:07.003058+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: cluster 2026-03-09T17:26:07.390990+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:07 vm00 bash[28333]: cluster 2026-03-09T17:26:07.390990+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: audit 2026-03-09T17:26:06.992444+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: audit 2026-03-09T17:26:06.992444+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: audit 2026-03-09T17:26:06.998153+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: audit 2026-03-09T17:26:06.998153+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: audit 2026-03-09T17:26:07.003058+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: audit 2026-03-09T17:26:07.003058+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: cluster 2026-03-09T17:26:07.390990+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T17:26:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:07 vm00 bash[20770]: cluster 2026-03-09T17:26:07.390990+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T17:26:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:08 vm02 bash[23351]: cluster 2026-03-09T17:26:07.358048+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:08 vm02 bash[23351]: cluster 2026-03-09T17:26:07.358048+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:08 vm02 bash[23351]: cluster 2026-03-09T17:26:08.307611+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T17:26:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:08 vm02 bash[23351]: cluster 2026-03-09T17:26:08.307611+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T17:26:09.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:08 vm00 bash[28333]: cluster 2026-03-09T17:26:07.358048+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 
KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:09.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:08 vm00 bash[28333]: cluster 2026-03-09T17:26:07.358048+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:09.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:08 vm00 bash[28333]: cluster 2026-03-09T17:26:08.307611+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T17:26:09.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:08 vm00 bash[28333]: cluster 2026-03-09T17:26:08.307611+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T17:26:09.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:08 vm00 bash[20770]: cluster 2026-03-09T17:26:07.358048+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:08 vm00 bash[20770]: cluster 2026-03-09T17:26:07.358048+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:08 vm00 bash[20770]: cluster 2026-03-09T17:26:08.307611+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T17:26:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:08 vm00 bash[20770]: cluster 2026-03-09T17:26:08.307611+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T17:26:10.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:10 vm02 bash[23351]: cluster 2026-03-09T17:26:09.358352+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:10.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:10 vm02 bash[23351]: cluster 2026-03-09T17:26:09.358352+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:11.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:10 vm00 bash[28333]: cluster 2026-03-09T17:26:09.358352+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:11.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:10 vm00 bash[28333]: cluster 2026-03-09T17:26:09.358352+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:10 vm00 bash[20770]: cluster 2026-03-09T17:26:09.358352+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:10 vm00 bash[20770]: cluster 2026-03-09T17:26:09.358352+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:11.701 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/3411876230 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318106940 msgr2=0x7fe31810d1d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/3411876230 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318106940 0x7fe31810d1d0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fe30c00b0a0 tx=0x7fe30c02f470 comp rx=0 tx=0).stop 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/3411876230 shutdown_connections 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/3411876230 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318106940 0x7fe31810d1d0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/3411876230 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318105f80 0x7fe318106400 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/3411876230 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe318104d80 0x7fe318105180 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/3411876230 >> 192.168.123.100:0/3411876230 conn(0x7fe318100510 msgr2=0x7fe318102950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/3411876230 shutdown_connections 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/3411876230 wait complete. 
2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 Processor -- start 2026-03-09T17:26:11.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- start start 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 0x7fe318111340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 0x7fe318111880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 0x7fe318111880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 0x7fe318111880 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:50710/0 (socket says 192.168.123.100:50710) 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe317fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 0x7fe318111340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe317fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 0x7fe318111340 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48596/0 (socket says 192.168.123.100:48596) 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe318106940 0x7fe3181168a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe3180783f0 con 0x7fe318104d80 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fe318078270 con 0x7fe318106940 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fe318078570 con 0x7fe318105f80 2026-03-09T17:26:11.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 -- 192.168.123.100:0/1127629404 learned_addr learned my addr 192.168.123.100:0/1127629404 (peer_addr_for_me 
v2:192.168.123.100:0/0) 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31cff3640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe318106940 0x7fe3181168a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 -- 192.168.123.100:0/1127629404 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe318106940 msgr2=0x7fe3181168a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe318106940 0x7fe3181168a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 -- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 msgr2=0x7fe318111340 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 0x7fe318111340 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 -- 192.168.123.100:0/1127629404 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe318116fd0 con 0x7fe318105f80 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3177fe640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 0x7fe318111880 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fe30400d950 tx=0x7fe30400de20 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe317fff640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 0x7fe318111340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:26:11.858 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe304014070 con 0x7fe318105f80 2026-03-09T17:26:11.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe3181172c0 con 0x7fe318105f80 2026-03-09T17:26:11.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.855+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fe318117850 con 0x7fe318105f80 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe3040044e0 con 0x7fe318105f80 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe304005020 con 0x7fe318105f80 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe2dc005180 con 0x7fe318105f80 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fe304020020 con 0x7fe318105f80 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe3157fa640 1 --2- 192.168.123.100:0/1127629404 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe2ec0775d0 0x7fe2ec079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(52..52 src has 1..52) ==== 4061+0+0 (secure 0 0 0) 0x7fe304099e30 con 0x7fe318105f80 2026-03-09T17:26:11.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.859+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe3040674e0 con 0x7fe318105f80 2026-03-09T17:26:11.864 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.863+0000 7fe317fff640 1 --2- 192.168.123.100:0/1127629404 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe2ec0775d0 0x7fe2ec079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:11.864 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.863+0000 7fe317fff640 1 --2- 192.168.123.100:0/1127629404 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe2ec0775d0 0x7fe2ec079a90 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fe3000059c0 tx=0x7fe30000a380 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:11.956 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.955+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7fe2dc005740 con 0x7fe318105f80 2026-03-09T17:26:11.957 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.955+0000 7fe3157fa640 1 -- 192.168.123.100:0/1127629404 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v52) ==== 74+0+130 (secure 0 0 0) 0x7fe30406c390 con 0x7fe318105f80 2026-03-09T17:26:11.957 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:26:11.959 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe2ec0775d0 msgr2=0x7fe2ec079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:11.959 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/1127629404 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe2ec0775d0 0x7fe2ec079a90 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fe3000059c0 tx=0x7fe30000a380 comp rx=0 tx=0).stop 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 msgr2=0x7fe318111880 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 0x7fe318111880 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fe30400d950 tx=0x7fe30400de20 comp rx=0 tx=0).stop 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 shutdown_connections 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/1127629404 >> v2:192.168.123.100:6800/3114914985 conn(0x7fe2ec0775d0 0x7fe2ec079a90 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe318106940 0x7fe3181168a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe318105f80 0x7fe318111880 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 --2- 192.168.123.100:0/1127629404 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe318104d80 0x7fe318111340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 >> 192.168.123.100:0/1127629404 conn(0x7fe318100510 msgr2=0x7fe318102900 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 shutdown_connections 2026-03-09T17:26:11.960 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:11.959+0000 7fe31ea7d640 1 -- 192.168.123.100:0/1127629404 wait complete. 2026-03-09T17:26:12.015 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":52,"num_osds":8,"num_up_osds":8,"osd_up_since":1773077166,"num_in_osds":8,"osd_in_since":1773077149,"num_remapped_pgs":0} 2026-03-09T17:26:12.015 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd dump --format=json 2026-03-09T17:26:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:12 vm02 bash[23351]: cluster 2026-03-09T17:26:11.358572+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:12 vm02 bash[23351]: cluster 2026-03-09T17:26:11.358572+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:12 vm02 bash[23351]: audit 2026-03-09T17:26:11.957278+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.100:0/1127629404' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T17:26:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:12 vm02 bash[23351]: audit 2026-03-09T17:26:11.957278+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.100:0/1127629404' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T17:26:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:12 vm00 bash[28333]: cluster 2026-03-09T17:26:11.358572+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:12 vm00 bash[28333]: cluster 2026-03-09T17:26:11.358572+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:12 vm00 bash[28333]: audit 2026-03-09T17:26:11.957278+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.100:0/1127629404' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T17:26:13.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:12 vm00 bash[28333]: audit 2026-03-09T17:26:11.957278+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 
192.168.123.100:0/1127629404' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T17:26:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:12 vm00 bash[20770]: cluster 2026-03-09T17:26:11.358572+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:12 vm00 bash[20770]: cluster 2026-03-09T17:26:11.358572+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:12 vm00 bash[20770]: audit 2026-03-09T17:26:11.957278+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.100:0/1127629404' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T17:26:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:12 vm00 bash[20770]: audit 2026-03-09T17:26:11.957278+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.100:0/1127629404' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: cephadm 2026-03-09T17:26:12.628130+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: cephadm 2026-03-09T17:26:12.628130+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.634162+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.634162+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.639751+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.639751+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.640753+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.640753+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.641178+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.641178+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.641503+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.641503+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.641873+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.641873+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: cephadm 2026-03-09T17:26:12.642274+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Adjusting osd_memory_target on vm02 to 113.9M 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: cephadm 2026-03-09T17:26:12.642274+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Adjusting osd_memory_target on vm02 to 113.9M 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: cephadm 2026-03-09T17:26:12.642772+0000 mgr.y (mgr.14150) 229 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: cephadm 2026-03-09T17:26:12.642772+0000 mgr.y (mgr.14150) 229 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.643056+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.643056+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.643449+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.643449+0000 mon.a (mon.0) 642 : audit [INF] 
from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.648510+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:13 vm00 bash[28333]: audit 2026-03-09T17:26:12.648510+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: cephadm 2026-03-09T17:26:12.628130+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: cephadm 2026-03-09T17:26:12.628130+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.634162+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.634162+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.639751+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.639751+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.640753+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.640753+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.641178+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.641178+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.641503+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 
bash[20770]: audit 2026-03-09T17:26:12.641503+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.641873+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.641873+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: cephadm 2026-03-09T17:26:12.642274+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Adjusting osd_memory_target on vm02 to 113.9M 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: cephadm 2026-03-09T17:26:12.642274+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Adjusting osd_memory_target on vm02 to 113.9M 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: cephadm 2026-03-09T17:26:12.642772+0000 mgr.y (mgr.14150) 229 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: cephadm 2026-03-09T17:26:12.642772+0000 mgr.y (mgr.14150) 229 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.643056+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.643056+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.643449+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.643449+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.648510+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:13 vm00 bash[20770]: audit 2026-03-09T17:26:12.648510+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:26:13 vm02 bash[23351]: cephadm 2026-03-09T17:26:12.628130+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: cephadm 2026-03-09T17:26:12.628130+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Detected new or changed devices on vm02 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.634162+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.634162+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.639751+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.639751+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.640753+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.640753+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.641178+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.641178+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.641503+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.641503+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.641873+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 
2026-03-09T17:26:12.641873+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: cephadm 2026-03-09T17:26:12.642274+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Adjusting osd_memory_target on vm02 to 113.9M 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: cephadm 2026-03-09T17:26:12.642274+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Adjusting osd_memory_target on vm02 to 113.9M 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: cephadm 2026-03-09T17:26:12.642772+0000 mgr.y (mgr.14150) 229 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: cephadm 2026-03-09T17:26:12.642772+0000 mgr.y (mgr.14150) 229 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.643056+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.643056+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.643449+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.643449+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.648510+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:13 vm02 bash[23351]: audit 2026-03-09T17:26:12.648510+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:14 vm00 bash[28333]: cluster 2026-03-09T17:26:13.358848+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:14 vm00 bash[28333]: cluster 2026-03-09T17:26:13.358848+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:14 vm00 bash[20770]: cluster 2026-03-09T17:26:13.358848+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v208: 
1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:14 vm00 bash[20770]: cluster 2026-03-09T17:26:13.358848+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:15.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:14 vm02 bash[23351]: cluster 2026-03-09T17:26:13.358848+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:15.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:14 vm02 bash[23351]: cluster 2026-03-09T17:26:13.358848+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:15.721 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.879+0000 7f76ac6f3640 1 -- 192.168.123.100:0/2264039204 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 msgr2=0x7f76a4105870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.879+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/2264039204 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a4105870 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f7694009a30 tx=0x7f769402f260 comp rx=0 tx=0).stop 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- 192.168.123.100:0/2264039204 shutdown_connections 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/2264039204 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a410bb10 0x7f76a410df00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/2264039204 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f76a41017b0 0x7f76a4105db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/2264039204 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a4105870 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.883 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- 192.168.123.100:0/2264039204 >> 192.168.123.100:0/2264039204 conn(0x7f76a40fc910 msgr2=0x7f76a40fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:15.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- 192.168.123.100:0/2264039204 shutdown_connections 2026-03-09T17:26:15.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- 192.168.123.100:0/2264039204 wait complete. 
2026-03-09T17:26:15.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 Processor -- start 2026-03-09T17:26:15.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- start start 2026-03-09T17:26:15.884 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a410a3f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a41017b0 0x7f76a410a930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f76a410bb10 0x7f76a41a1400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f76a4110070 con 0x7f76a4069a50 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f76a410fef0 con 0x7f76a410bb10 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f76a41101f0 con 0x7f76a41017b0 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76aa468640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a410a3f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76aac69640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f76a410bb10 0x7f76a41a1400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76aa468640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a410a3f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48618/0 (socket says 192.168.123.100:48618) 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76aa468640 1 -- 192.168.123.100:0/1094411166 learned_addr learned my addr 192.168.123.100:0/1094411166 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a41017b0 0x7f76a410a930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:15.885 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 -- 192.168.123.100:0/1094411166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f76a410bb10 msgr2=0x7f76a41a1400 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f76a410bb10 0x7f76a41a1400 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 -- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 msgr2=0x7f76a410a3f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a410a3f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.885 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 -- 192.168.123.100:0/1094411166 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f76a41a1b80 con 0x7f76a41017b0 2026-03-09T17:26:15.886 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76a9c67640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a41017b0 0x7f76a410a930 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f769800cce0 tx=0x7f7698007590 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:15.887 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7698007cb0 con 0x7f76a41017b0 2026-03-09T17:26:15.887 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f7698007e50 con 0x7f76a41017b0 2026-03-09T17:26:15.887 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f769800f450 con 0x7f76a41017b0 2026-03-09T17:26:15.887 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f76a41a1e70 con 0x7f76a41017b0 2026-03-09T17:26:15.887 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.883+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f76a4199d40 con 0x7f76a41017b0 2026-03-09T17:26:15.888 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.887+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f7698020050 con 0x7f76a41017b0 2026-03-09T17:26:15.888 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.887+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7674005180 con 0x7f76a41017b0 2026-03-09T17:26:15.891 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.887+0000 7f76937fe640 1 --2- 192.168.123.100:0/1094411166 >> v2:192.168.123.100:6800/3114914985 conn(0x7f76840775d0 0x7f7684079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:15.891 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.887+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(52..52 src has 1..52) ==== 4061+0+0 (secure 0 0 0) 0x7f7698098be0 con 0x7f76a41017b0 2026-03-09T17:26:15.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.891+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7698014030 con 0x7f76a41017b0 2026-03-09T17:26:15.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.891+0000 7f76aa468640 1 --2- 192.168.123.100:0/1094411166 >> v2:192.168.123.100:6800/3114914985 conn(0x7f76840775d0 0x7f7684079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:15.895 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.891+0000 7f76aa468640 1 --2- 192.168.123.100:0/1094411166 >> v2:192.168.123.100:6800/3114914985 conn(0x7f76840775d0 0x7f7684079a90 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f76940097c0 tx=0x7f7694005e50 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:15.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.983+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f7674005470 con 0x7f76a41017b0 2026-03-09T17:26:15.985 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.983+0000 7f76937fe640 1 -- 192.168.123.100:0/1094411166 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v52) ==== 74+0+11712 (secure 0 0 0) 0x7f7698066210 con 0x7f76a41017b0 2026-03-09T17:26:15.986 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:26:15.986 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":52,"fsid":"16190428-1bdc-11f1-aea4-d920f1c7e51e","created":"2026-03-09T17:20:18.987272+0000","modified":"2026-03-09T17:26:08.296114+0000","last_up_change":"2026-03-09T17:26:06.190800+0000","last_in_change":"2026-03-09T17:25:49.789972+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T17:23:15.373179+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"568cb8ad-2652-448a-8223-a18b7a893c0f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6801","nonce":1564530650}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":1564530650}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":1564530650}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":1564530650}]},"public_addr":"192.168.123.100:6801/1564530650","cluster_addr":"192.168.123.100:6802/1564530650","heartbeat_back_addr":"192.168.123.100:6804/1564530650","heartbeat_front_addr":"192.168.123.100:6803/1564530650","state":["exists","up"]},{"osd":1,"uuid":"e9e37873-3fd7-4a71-be36-c91d099132ac","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":33,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6805","nonce":1086087815}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":1086087815}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":1086087815}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":1086087815}]},"public_addr":"192.168.123.100:6805/1086087815","cluster_addr":"192.168.123.100:6806/1086087815","heartbeat_back_addr":"192.168.123.100:6808/1086087815","heartbeat_front_addr":"192.168.123.100:6807/1086087815","state":["exists","up"]},{"osd":2,"uuid":"7306de3d-a962-4f03-99cd-7f218259f7e5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6809","nonce":4038313383}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":4038313383}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":4038313383}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":4038313383}]},"public_addr":"192.168.123.100:6809/4038313383","cluster_addr":"192.168.123.100:6810/4038313383","heartbeat_back_addr":"192.168.123.100:6812/4038313383","heartbeat_front_addr":"192.168.123.100:6811/4038313383","state":["exists","up"]},{"osd":3,"uuid":"814401b5-fa87-447f-8581-6e4a7fde7f2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6813","nonce":652999983}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":652999983}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":652999983}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":652999983}]},"public_addr":"192.168.123.100:6813/652999983","cluster_addr":"192.168.123.100:6814/652999983","heartbeat_back_addr":"192.168.123.100:6816/652999983","heartbeat_front_addr":"192.168.123.100:6815/652999983","state":["exists","up"]},{"osd":4,"uuid":"dae36d57-3a29-4f68-ae98-8c2455750
9f1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":1924015120}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6801","nonce":1924015120}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6803","nonce":1924015120}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":1924015120}]},"public_addr":"192.168.123.102:6800/1924015120","cluster_addr":"192.168.123.102:6801/1924015120","heartbeat_back_addr":"192.168.123.102:6803/1924015120","heartbeat_front_addr":"192.168.123.102:6802/1924015120","state":["exists","up"]},{"osd":5,"uuid":"af8538f6-82da-4394-a355-4fac49048640","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":39,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2433922459}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6805","nonce":2433922459}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6807","nonce":2433922459}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2433922459}]},"public_addr":"192.168.123.102:6804/2433922459","cluster_addr":"192.168.123.102:6805/2433922459","heartbeat_back_addr":"192.168.123.102:6807/2433922459","heartbeat_front_addr":"192.168.123.102:6806/2433922459","state":["exists","up"]},{"osd":6,"uuid":"8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":45,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2053868073}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6809","nonce":2053868073}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6811","nonce":2053868073}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2053868073}]},"public_addr":"192.168.123.102:6808/2053868073","cluster_addr":"192.168.123.102:6809/2053868073","heartbeat_back_addr":"192.168.123.102:6811/2053868073","heartbeat_front_addr":"192.168.123.102:6810/2053868073","state":["exists","up"]},{"osd":7,"uuid":"5e273c6f-f4ee-411b-abb9-572d1556cdc9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":50,"up_thru":51,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":3460410068}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6813","nonce":3460410068}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6815","nonce":3460410068}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":3460410068}]},"public_addr":"192.168.123.102:6812/3460410068","cluster_addr":"192.168.123.102:6813/3460410068","heartbeat_back_addr":"192.168.123.102:6815/3460410068","heartbeat_front_addr":"192.168.123.102:6814/3460410068","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:22:05.721108+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-09T17:22:38.798160+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:23:11.822442+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:23:47.183439+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:24:21.490435+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:24:55.760395+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:25:31.248063+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:26:04.171875+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6800/1655006894":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/2883785981":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/2563954711":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/4291408295":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/2951980884":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/408944714":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/327913294":"2026-03-10T17:20:41.305093+0000","192.168.123.100:6800/2312331294":"2026-03-10T17:20:31.039632+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T17:26:15.988 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 >> v2:192.168.123.100:6800/3114914985 conn(0x7f76840775d0 msgr2=0x7f7684079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:15.988 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/1094411166 >> v2:192.168.123.100:6800/3114914985 conn(0x7f76840775d0 0x7f7684079a90 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f76940097c0 tx=0x7f7694005e50 comp rx=0 tx=0).stop 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a41017b0 msgr2=0x7f76a410a930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a41017b0 0x7f76a410a930 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f769800cce0 tx=0x7f7698007590 comp rx=0 tx=0).stop 2026-03-09T17:26:15.989 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 shutdown_connections 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/1094411166 >> v2:192.168.123.100:6800/3114914985 conn(0x7f76840775d0 0x7f7684079a90 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f76a410bb10 0x7f76a41a1400 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f76a41017b0 0x7f76a410a930 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 --2- 192.168.123.100:0/1094411166 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f76a4069a50 0x7f76a410a3f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 >> 192.168.123.100:0/1094411166 conn(0x7f76a40fc910 msgr2=0x7f76a410d950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 shutdown_connections 2026-03-09T17:26:15.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:15.987+0000 7f76ac6f3640 1 -- 192.168.123.100:0/1094411166 wait complete. 
2026-03-09T17:26:16.049 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T17:23:15.373179+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T17:26:16.049 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd pool get .mgr pg_num 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:16 vm00 bash[20770]: cluster 2026-03-09T17:26:15.359166+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:16 vm00 bash[20770]: cluster 2026-03-09T17:26:15.359166+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:16 vm00 bash[20770]: audit 2026-03-09T17:26:15.985485+0000 mon.c (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/1094411166' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:16 vm00 bash[20770]: audit 2026-03-09T17:26:15.985485+0000 mon.c (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.100:0/1094411166' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:16 vm00 bash[28333]: cluster 2026-03-09T17:26:15.359166+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:16 vm00 bash[28333]: cluster 2026-03-09T17:26:15.359166+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:16 vm00 bash[28333]: audit 2026-03-09T17:26:15.985485+0000 mon.c (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/1094411166' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:26:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:16 vm00 bash[28333]: audit 2026-03-09T17:26:15.985485+0000 mon.c (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/1094411166' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:26:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:16 vm02 bash[23351]: cluster 2026-03-09T17:26:15.359166+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:16 vm02 bash[23351]: cluster 2026-03-09T17:26:15.359166+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:16 vm02 bash[23351]: audit 2026-03-09T17:26:15.985485+0000 mon.c (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/1094411166' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:26:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:16 vm02 bash[23351]: audit 2026-03-09T17:26:15.985485+0000 mon.c (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.100:0/1094411166' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:26:19.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:18 vm00 bash[20770]: cluster 2026-03-09T17:26:17.359513+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:18 vm00 bash[20770]: cluster 2026-03-09T17:26:17.359513+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:18 vm00 bash[28333]: cluster 2026-03-09T17:26:17.359513+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:18 vm00 bash[28333]: cluster 2026-03-09T17:26:17.359513+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:18 vm02 bash[23351]: cluster 2026-03-09T17:26:17.359513+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:18 vm02 bash[23351]: cluster 2026-03-09T17:26:17.359513+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:19.744 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/222198035 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 msgr2=0x7f0744105db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- 192.168.123.100:0/222198035 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f0744105db0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f072c00ab80 tx=0x7f072c030600 comp rx=0 tx=0).stop 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/222198035 shutdown_connections 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- 192.168.123.100:0/222198035 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 0x7f074410df00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- 192.168.123.100:0/222198035 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f0744105db0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- 192.168.123.100:0/222198035 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f0744105870 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/222198035 >> 192.168.123.100:0/222198035 conn(0x7f07440fc910 msgr2=0x7f07440fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/222198035 shutdown_connections 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/222198035 wait complete. 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 Processor -- start 2026-03-09T17:26:19.904 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- start start 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f07441a27a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f07441a2ce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074a47b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f07441a27a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074a47b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f07441a27a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:49192/0 (socket says 192.168.123.100:49192) 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 0x7f074419c960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0744110070 con 0x7f0744069a50 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f074410fef0 con 0x7f074410bb10 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f07441101f0 con 0x7f07441017b0 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074a47b640 1 -- 192.168.123.100:0/1291767903 learned_addr learned my addr 192.168.123.100:0/1291767903 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:26:19.905 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 --2- 192.168.123.100:0/1291767903 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 0x7f074419c960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f0749c7a640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f07441a2ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 -- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 msgr2=0x7f07441a2ce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f07441a2ce0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 -- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 msgr2=0x7f07441a27a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f07441a27a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 -- 192.168.123.100:0/1291767903 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f074419d130 con 0x7f074410bb10 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f0749c7a640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f07441a2ce0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074a47b640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f07441a27a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
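The numbered `--` / `--2-` lines in this stretch are the client-side messenger trace of a short-lived ceph CLI client: it starts its Processor, sends mon_getmap to the configured monitors, subscribes to the maps it needs, and then marks the spare connections down. An equivalent trace can be reproduced by hand against this cluster; a minimal sketch, assuming a node with /etc/ceph/ceph.conf and the admin keyring (any read-only command works, `osd dump` is just one already seen in this excerpt):

    # Raise messenger debugging to 1 for this one invocation so the
    # mon_getmap / mon_subscribe handshake is printed on stderr, as above.
    sudo ceph --debug-ms 1 osd dump --format json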
2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074ac7c640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 0x7f074419c960 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f074000bdf0 tx=0x7f074000bef0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:19.906 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f074000ca60 con 0x7f074410bb10 2026-03-09T17:26:19.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f074419d420 con 0x7f074410bb10 2026-03-09T17:26:19.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0740010070 con 0x7f074410bb10 2026-03-09T17:26:19.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f07400156d0 con 0x7f074410bb10 2026-03-09T17:26:19.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.903+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f07441a9840 con 0x7f074410bb10 2026-03-09T17:26:19.907 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0744105870 con 0x7f074410bb10 2026-03-09T17:26:19.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f07400040b0 con 0x7f074410bb10 2026-03-09T17:26:19.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f073b7fe640 1 --2- 192.168.123.100:0/1291767903 >> v2:192.168.123.100:6800/3114914985 conn(0x7f071c0775d0 0x7f071c079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:19.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f074a47b640 1 --2- 192.168.123.100:0/1291767903 >> v2:192.168.123.100:6800/3114914985 conn(0x7f071c0775d0 0x7f071c079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:19.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) ==== 4061+0+0 (secure 0 0 0) 0x7f0740098ee0 con 0x7f074410bb10 2026-03-09T17:26:19.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f074a47b640 1 --2- 192.168.123.100:0/1291767903 >> v2:192.168.123.100:6800/3114914985 conn(0x7f071c0775d0 0x7f071c079a90 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f0734007330 tx=0x7f07340072c0 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:19.911 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:19.907+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f07400628e0 con 0x7f074410bb10 2026-03-09T17:26:20.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.011+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) -- 0x7f07441039c0 con 0x7f074410bb10 2026-03-09T17:26:20.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.011+0000 7f073b7fe640 1 -- 192.168.123.100:0/1291767903 <== mon.1 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v52) ==== 93+0+10 (secure 0 0 0) 0x7f0740066590 con 0x7f074410bb10 2026-03-09T17:26:20.014 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-09T17:26:20.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 >> v2:192.168.123.100:6800/3114914985 conn(0x7f071c0775d0 msgr2=0x7f071c079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:20.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 --2- 192.168.123.100:0/1291767903 >> v2:192.168.123.100:6800/3114914985 conn(0x7f071c0775d0 0x7f071c079a90 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f0734007330 tx=0x7f07340072c0 comp rx=0 tx=0).stop 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 msgr2=0x7f074419c960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 0x7f074419c960 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f074000bdf0 tx=0x7f074000bef0 comp rx=0 tx=0).stop 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 shutdown_connections 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 --2- 192.168.123.100:0/1291767903 >> v2:192.168.123.100:6800/3114914985 conn(0x7f071c0775d0 0x7f071c079a90 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f074410bb10 0x7f074419c960 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 --2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f07441017b0 0x7f07441a2ce0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 
--2- 192.168.123.100:0/1291767903 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0744069a50 0x7f07441a27a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 >> 192.168.123.100:0/1291767903 conn(0x7f07440fc910 msgr2=0x7f074410dbd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 shutdown_connections 2026-03-09T17:26:20.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:26:20.015+0000 7f074c706640 1 -- 192.168.123.100:0/1291767903 wait complete. 2026-03-09T17:26:20.121 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm00 2026-03-09T17:26:20.121 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply rgw foo.a --placement '1;vm00=foo.a' 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:20 vm00 bash[28333]: cluster 2026-03-09T17:26:19.359804+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:20 vm00 bash[28333]: cluster 2026-03-09T17:26:19.359804+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:20 vm00 bash[28333]: audit 2026-03-09T17:26:20.014194+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/1291767903' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:20 vm00 bash[28333]: audit 2026-03-09T17:26:20.014194+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/1291767903' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:20 vm00 bash[20770]: cluster 2026-03-09T17:26:19.359804+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:20 vm00 bash[20770]: cluster 2026-03-09T17:26:19.359804+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:20 vm00 bash[20770]: audit 2026-03-09T17:26:20.014194+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/1291767903' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T17:26:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:20 vm00 bash[20770]: audit 2026-03-09T17:26:20.014194+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 
192.168.123.100:0/1291767903' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T17:26:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:20 vm02 bash[23351]: cluster 2026-03-09T17:26:19.359804+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:20 vm02 bash[23351]: cluster 2026-03-09T17:26:19.359804+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:20 vm02 bash[23351]: audit 2026-03-09T17:26:20.014194+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/1291767903' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T17:26:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:20 vm02 bash[23351]: audit 2026-03-09T17:26:20.014194+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/1291767903' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T17:26:23.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:22 vm00 bash[20770]: cluster 2026-03-09T17:26:21.360128+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:22 vm00 bash[20770]: cluster 2026-03-09T17:26:21.360128+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:22 vm00 bash[28333]: cluster 2026-03-09T17:26:21.360128+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:22 vm00 bash[28333]: cluster 2026-03-09T17:26:21.360128+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:23.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:22 vm02 bash[23351]: cluster 2026-03-09T17:26:21.360128+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:23.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:22 vm02 bash[23351]: cluster 2026-03-09T17:26:21.360128+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:24.758 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- 192.168.123.102:0/49958997 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 msgr2=0x7fa29810d1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/49958997 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa29810d1c0 secure :-1 s=READY pgs=49 cs=0 l=1 
rev1=1 crypto rx=0x7fa29400b3e0 tx=0x7fa29402f690 comp rx=0 tx=0).stop 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- 192.168.123.102:0/49958997 shutdown_connections 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/49958997 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa29810d1c0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/49958997 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 0x7fa2981063f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/49958997 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa298105170 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- 192.168.123.102:0/49958997 >> 192.168.123.102:0/49958997 conn(0x7fa298100520 msgr2=0x7fa298102940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:24.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- 192.168.123.102:0/49958997 shutdown_connections 2026-03-09T17:26:24.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- 192.168.123.102:0/49958997 wait complete. 2026-03-09T17:26:24.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 Processor -- start 2026-03-09T17:26:24.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- start start 2026-03-09T17:26:24.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa29819c520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:24.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 0x7fa29819ca60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa2981a3ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa29810fe10 con 0x7fa298105f70 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fa29810fc90 con 0x7fa298106930 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29fa7e640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa29810ff90 con 
0x7fa298104d70 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29d7f3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa29819c520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29d7f3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa29819c520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.102:55514/0 (socket says 192.168.123.102:55514) 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29d7f3640 1 -- 192.168.123.102:0/580784420 learned_addr learned my addr 192.168.123.102:0/580784420 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29d7f3640 1 -- 192.168.123.102:0/580784420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 msgr2=0x7fa2981a3ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.904+0000 7fa29cff2640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 0x7fa29819ca60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:24.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29dff4640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa2981a3ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29d7f3640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa2981a3ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29d7f3640 1 -- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 msgr2=0x7fa29819ca60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29d7f3640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 0x7fa29819ca60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29d7f3640 1 -- 192.168.123.102:0/580784420 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa2981a41e0 con 0x7fa298104d70 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29cff2640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 0x7fa29819ca60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29dff4640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa2981a3ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T17:26:24.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29d7f3640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa29819c520 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fa28c0027e0 tx=0x7fa28c002cb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa28c00ed40 con 0x7fa298104d70 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa298077590 con 0x7fa298104d70 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fa298077b20 con 0x7fa298104d70 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa28c01b070 con 0x7fa298104d70 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa28c01e370 con 0x7fa298104d70 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa260005180 con 0x7fa298104d70 2026-03-09T17:26:24.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fa28c043700 con 0x7fa298104d70 2026-03-09T17:26:24.915 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa2867fc640 1 --2- 192.168.123.102:0/580784420 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa26c077600 0x7fa26c079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:24.915 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(52..52 src has 1..52) ==== 4061+0+0 (secure 0 0 0) 0x7fa28c0a32e0 con 0x7fa298104d70 2026-03-09T17:26:24.915 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29cff2640 1 --2- 192.168.123.102:0/580784420 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa26c077600 0x7fa26c079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:24.915 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.908+0000 7fa29cff2640 1 --2- 192.168.123.102:0/580784420 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa26c077600 0x7fa26c079ac0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fa29819da40 tx=0x7fa288005f50 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:24.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:24.912+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa28c06cce0 con 0x7fa298104d70 2026-03-09T17:26:25.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.008+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}) -- 0x7fa260002bf0 con 0x7fa26c077600 2026-03-09T17:26:25.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.016+0000 7fa2867fc640 1 -- 192.168.123.102:0/580784420 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+30 (secure 0 0 0) 0x7fa260002bf0 con 0x7fa26c077600 2026-03-09T17:26:25.023 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled rgw.foo.a update... 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa26c077600 msgr2=0x7fa26c079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/580784420 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa26c077600 0x7fa26c079ac0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fa29819da40 tx=0x7fa288005f50 comp rx=0 tx=0).stop 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 msgr2=0x7fa29819c520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa29819c520 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fa28c0027e0 tx=0x7fa28c002cb0 comp rx=0 tx=0).stop 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 shutdown_connections 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/580784420 >> v2:192.168.123.100:6800/3114914985 conn(0x7fa26c077600 0x7fa26c079ac0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa298106930 0x7fa2981a3ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:25.026 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa298105f70 0x7fa29819ca60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 --2- 192.168.123.102:0/580784420 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa298104d70 0x7fa29819c520 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:25.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 >> 192.168.123.102:0/580784420 conn(0x7fa298100520 msgr2=0x7fa298101fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:25.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 shutdown_connections 2026-03-09T17:26:25.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:25.020+0000 7fa29fa7e640 1 -- 192.168.123.102:0/580784420 wait complete. 2026-03-09T17:26:25.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:24 vm00 bash[20770]: cluster 2026-03-09T17:26:23.360409+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:25.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:24 vm00 bash[20770]: cluster 2026-03-09T17:26:23.360409+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:24 vm00 bash[28333]: cluster 2026-03-09T17:26:23.360409+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:25.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:24 vm00 bash[28333]: cluster 2026-03-09T17:26:23.360409+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:25.041 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:24 vm02 bash[23351]: cluster 2026-03-09T17:26:23.360409+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:25.041 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:24 vm02 bash[23351]: cluster 2026-03-09T17:26:23.360409+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:25.088 DEBUG:teuthology.orchestra.run.vm00:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@rgw.foo.a.service 2026-03-09T17:26:25.088 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm02 2026-03-09T17:26:25.089 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd pool create datapool 3 3 replicated 2026-03-09T17:26:25.855 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.855 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:26:25.856 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.856 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.856 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.856 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.857 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.857 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:25.857 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 17:26:25 vm00 systemd[1]: Started Ceph rgw.foo.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 
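Between the repeated KillMode warnings, systemd reports `Started Ceph rgw.foo.a`: this is the daemon created by the `ceph orch apply rgw foo.a --placement '1;vm00=foo.a'` call issued a few lines earlier. The same service can be declared as a cephadm service spec instead of an inline placement string; a minimal sketch using only values visible in this log (the file name is illustrative, and a ceph CLI with the admin keyring, e.g. inside `cephadm shell`, is assumed):

    # Same RGW service as the CLI call above, expressed as a spec file.
    cat > rgw-foo-a.yaml <<'EOF'
    service_type: rgw
    service_id: foo.a
    placement:
      count: 1
      hosts:
        - vm00
    EOF
    ceph orch apply -i rgw-foo-a.yaml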
2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.016171+0000 mgr.y (mgr.14150) 236 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.016171+0000 mgr.y (mgr.14150) 236 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cephadm 2026-03-09T17:26:25.017127+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cephadm 2026-03-09T17:26:25.017127+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.023559+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.023559+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.024762+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.024762+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.025810+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.025810+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.026273+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.026273+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.031042+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 
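The audit trail above shows the mgr-side half of that deployment: cephadm dumps and regenerates a minimal conf (`config dump`, `config generate-minimal-conf`), reads the admin key (`auth get`), and mints the daemon's own credentials with `auth get-or-create` for client.rgw.foo.a, granting mon `allow *`, mgr `allow rw` and osd `allow rwx tag rgw *=*`. If that key needs to be inspected after the fact, a one-line sketch (admin keyring assumed):

    # Show the entity, secret and caps the orchestrator just created for the RGW daemon.
    ceph auth get client.rgw.foo.a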
2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.031042+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.032123+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.032123+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.034315+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.016171+0000 mgr.y (mgr.14150) 236 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.016171+0000 mgr.y (mgr.14150) 236 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cephadm 2026-03-09T17:26:25.017127+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cephadm 2026-03-09T17:26:25.017127+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.023559+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.023559+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.024762+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.024762+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
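With the rgw.foo.a spec saved and the unit already started on vm00 above, the usual way to confirm that the orchestrator has converged is to list services and daemons from any admin node. A short sketch (exact output columns vary by release):

    # Service-level view: placement and running/expected daemon counts for RGW.
    ceph orch ls rgw
    # Daemon-level view: which host rgw.foo.a landed on and whether it is up.
    ceph orch ps | grep rgw.foo.a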
2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.025810+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.025810+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.026273+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.026273+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.031042+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.031042+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.032123+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.032123+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.034315+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.034315+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.042096+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.042096+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 
192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.043914+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.043914+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cephadm 2026-03-09T17:26:25.044414+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cephadm 2026-03-09T17:26:25.044414+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cluster 2026-03-09T17:26:25.360666+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cluster 2026-03-09T17:26:25.360666+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.896600+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.896600+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.902988+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.902988+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.910154+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.910154+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cephadm 2026-03-09T17:26:25.910842+0000 mgr.y (mgr.14150) 240 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: cephadm 2026-03-09T17:26:25.910842+0000 mgr.y (mgr.14150) 240 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.915655+0000 mon.a (mon.0) 656 : audit 
[INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.915655+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.921723+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.921723+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.933107+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:26 vm00 bash[20770]: audit 2026-03-09T17:26:25.933107+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.034315+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.042096+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.042096+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.043914+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.043914+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cephadm 2026-03-09T17:26:25.044414+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cephadm 2026-03-09T17:26:25.044414+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cluster 2026-03-09T17:26:25.360666+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cluster 2026-03-09T17:26:25.360666+0000 mgr.y 
(mgr.14150) 239 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.896600+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.896600+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.902988+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.902988+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.910154+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.910154+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cephadm 2026-03-09T17:26:25.910842+0000 mgr.y (mgr.14150) 240 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: cephadm 2026-03-09T17:26:25.910842+0000 mgr.y (mgr.14150) 240 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.915655+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.915655+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.921723+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.921723+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.933107+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.290 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:26 vm00 bash[28333]: audit 2026-03-09T17:26:25.933107+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 
2026-03-09T17:26:25.016171+0000 mgr.y (mgr.14150) 236 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.016171+0000 mgr.y (mgr.14150) 236 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cephadm 2026-03-09T17:26:25.017127+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cephadm 2026-03-09T17:26:25.017127+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.023559+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.023559+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.024762+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.024762+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.025810+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.025810+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.026273+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.026273+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.031042+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 
2026-03-09T17:26:25.031042+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.032123+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.032123+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.034315+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.034315+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.042096+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.042096+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.043914+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.043914+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cephadm 2026-03-09T17:26:25.044414+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cephadm 2026-03-09T17:26:25.044414+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cluster 2026-03-09T17:26:25.360666+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cluster 2026-03-09T17:26:25.360666+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v214: 
1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.896600+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.896600+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.902988+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.902988+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.910154+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.910154+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cephadm 2026-03-09T17:26:25.910842+0000 mgr.y (mgr.14150) 240 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: cephadm 2026-03-09T17:26:25.910842+0000 mgr.y (mgr.14150) 240 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.915655+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.915655+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.921723+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.921723+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.933107+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:26.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:26 vm02 bash[23351]: audit 2026-03-09T17:26:25.933107+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: audit 2026-03-09T17:26:27.075453+0000 mon.c (mon.2) 21 : audit [INF] 
from='client.? 192.168.123.100:0/652074395' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: audit 2026-03-09T17:26:27.075453+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/652074395' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: cluster 2026-03-09T17:26:27.075831+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: cluster 2026-03-09T17:26:27.075831+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: audit 2026-03-09T17:26:27.077563+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: audit 2026-03-09T17:26:27.077563+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: cluster 2026-03-09T17:26:27.360954+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v216: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:28 vm02 bash[23351]: cluster 2026-03-09T17:26:27.360954+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v216: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: audit 2026-03-09T17:26:27.075453+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/652074395' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: audit 2026-03-09T17:26:27.075453+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/652074395' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: cluster 2026-03-09T17:26:27.075831+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: cluster 2026-03-09T17:26:27.075831+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: audit 2026-03-09T17:26:27.077563+0000 mon.a (mon.0) 660 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: audit 2026-03-09T17:26:27.077563+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: cluster 2026-03-09T17:26:27.360954+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v216: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:28 vm00 bash[28333]: cluster 2026-03-09T17:26:27.360954+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v216: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: audit 2026-03-09T17:26:27.075453+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/652074395' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: audit 2026-03-09T17:26:27.075453+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/652074395' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: cluster 2026-03-09T17:26:27.075831+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: cluster 2026-03-09T17:26:27.075831+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: audit 2026-03-09T17:26:27.077563+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: audit 2026-03-09T17:26:27.077563+0000 mon.a (mon.0) 660 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: cluster 2026-03-09T17:26:27.360954+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v216: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:28 vm00 bash[20770]: cluster 2026-03-09T17:26:27.360954+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v216: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:26:28.785 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/2715514712 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 msgr2=0x7f098810ac10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- 192.168.123.102:0/2715514712 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f098810ac10 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f098400b0a0 tx=0x7f098402f430 comp rx=0 tx=0).stop 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/2715514712 shutdown_connections 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- 192.168.123.102:0/2715514712 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f098810ac10 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- 192.168.123.102:0/2715514712 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0988105c40 0x7f0988108090 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- 192.168.123.102:0/2715514712 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f098806be30 0x7f0988105670 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:28.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/2715514712 >> 192.168.123.102:0/2715514712 conn(0x7f09880fd120 msgr2=0x7f09880ff560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:28.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/2715514712 shutdown_connections 2026-03-09T17:26:28.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/2715514712 wait complete. 
2026-03-09T17:26:28.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 Processor -- start 2026-03-09T17:26:28.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- start start 2026-03-09T17:26:28.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 0x7f0988102f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0988105c40 0x7f0988101590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098da4f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 0x7f0988102f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098da4f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 0x7f0988102f40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:49686/0 (socket says 192.168.123.102:49686) 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f0988101ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098da4f640 1 -- 192.168.123.102:0/4018232842 learned_addr learned my addr 192.168.123.102:0/4018232842 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f098810dba0 con 0x7f098806be30 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f098810da20 con 0x7f0988108750 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f098810dd20 con 0x7f0988105c40 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f0988101ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:28.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098d24e640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0988105c40 0x7f0988101590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp 
rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 -- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0988105c40 msgr2=0x7f0988101590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0988105c40 0x7f0988101590 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 -- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 msgr2=0x7f0988102f40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 0x7f0988102f40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0988102010 con 0x7f0988108750 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098e250640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f0988101ad0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f0984036000 tx=0x7f0984004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0984047070 con 0x7f0988108750 2026-03-09T17:26:28.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098da4f640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 0x7f0988102f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:26:28.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f098fcda640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f09881022a0 con 0x7f0988108750 2026-03-09T17:26:28.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0984004770 con 0x7f0988108750 2026-03-09T17:26:28.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.936+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0984042420 con 0x7f0988108750 2026-03-09T17:26:28.945 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.940+0000 7f098fcda640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f09881a8f80 con 0x7f0988108750 2026-03-09T17:26:28.945 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.940+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0954005180 con 0x7f0988108750 2026-03-09T17:26:28.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.948+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f0984002a60 con 0x7f0988108750 2026-03-09T17:26:28.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.948+0000 7f0976ffd640 1 --2- 192.168.123.102:0/4018232842 >> v2:192.168.123.100:6800/3114914985 conn(0x7f095c0775d0 0x7f095c079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:28.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.948+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(54..54 src has 1..54) ==== 4420+0+0 (secure 0 0 0) 0x7f09840bdeb0 con 0x7f0988108750 2026-03-09T17:26:28.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.948+0000 7f098da4f640 1 --2- 192.168.123.102:0/4018232842 >> v2:192.168.123.100:6800/3114914985 conn(0x7f095c0775d0 0x7f095c079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:28.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.948+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f098403d070 con 0x7f0988108750 2026-03-09T17:26:28.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:28.948+0000 7f098da4f640 1 --2- 192.168.123.102:0/4018232842 >> v2:192.168.123.100:6800/3114914985 conn(0x7f095c0775d0 0x7f095c079a90 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f09780045d0 tx=0x7f0978009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:29.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.040+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, 
"pool_type": "replicated"} v 0) -- 0x7f0954005470 con 0x7f0988108750 2026-03-09T17:26:29.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.084+0000 7f0976ffd640 1 -- 192.168.123.102:0/4018232842 <== mon.1 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]=0 pool 'datapool' created v55) ==== 160+0+0 (secure 0 0 0) 0x7f098408b3f0 con 0x7f0988108750 2026-03-09T17:26:29.095 INFO:teuthology.orchestra.run.vm02.stderr:pool 'datapool' created 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 >> v2:192.168.123.100:6800/3114914985 conn(0x7f095c0775d0 msgr2=0x7f095c079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 --2- 192.168.123.102:0/4018232842 >> v2:192.168.123.100:6800/3114914985 conn(0x7f095c0775d0 0x7f095c079a90 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f09780045d0 tx=0x7f0978009290 comp rx=0 tx=0).stop 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 msgr2=0x7f0988101ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f0988101ad0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f0984036000 tx=0x7f0984004290 comp rx=0 tx=0).stop 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 shutdown_connections 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 --2- 192.168.123.102:0/4018232842 >> v2:192.168.123.100:6800/3114914985 conn(0x7f095c0775d0 0x7f095c079a90 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0988108750 0x7f0988101ad0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0988105c40 0x7f0988101590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 --2- 192.168.123.102:0/4018232842 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f098806be30 0x7f0988102f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:29.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 >> 192.168.123.102:0/4018232842 conn(0x7f09880fd120 msgr2=0x7f0988103e80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:29.098 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 shutdown_connections 2026-03-09T17:26:29.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:29.092+0000 7f0974ff9640 1 -- 192.168.123.102:0/4018232842 wait complete. 2026-03-09T17:26:29.148 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- rbd pool init datapool 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: cluster 2026-03-09T17:26:28.058285+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: cluster 2026-03-09T17:26:28.058285+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:28.069452+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:28.069452+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: cluster 2026-03-09T17:26:28.083105+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: cluster 2026-03-09T17:26:28.083105+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:28.893434+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:28.893434+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:29.047712+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.102:0/4018232842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:29.047712+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.102:0/4018232842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:29.048261+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:29 vm02 bash[23351]: audit 2026-03-09T17:26:29.048261+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: cluster 2026-03-09T17:26:28.058285+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: cluster 2026-03-09T17:26:28.058285+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:28.069452+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:28.069452+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: cluster 2026-03-09T17:26:28.083105+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: cluster 2026-03-09T17:26:28.083105+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:28.893434+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:28.893434+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:29.047712+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.102:0/4018232842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:29.047712+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.102:0/4018232842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:29.048261+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:29 vm00 bash[28333]: audit 2026-03-09T17:26:29.048261+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: cluster 2026-03-09T17:26:28.058285+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: cluster 2026-03-09T17:26:28.058285+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:28.069452+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:28.069452+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: cluster 2026-03-09T17:26:28.083105+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: cluster 2026-03-09T17:26:28.083105+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:28.893434+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:28.893434+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:29.047712+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.102:0/4018232842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:29.047712+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.102:0/4018232842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:29.048261+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:29 vm00 bash[20770]: audit 2026-03-09T17:26:29.048261+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: audit 2026-03-09T17:26:29.079507+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: audit 2026-03-09T17:26:29.079507+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: cluster 2026-03-09T17:26:29.084517+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: cluster 2026-03-09T17:26:29.084517+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: audit 2026-03-09T17:26:29.093399+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: audit 2026-03-09T17:26:29.093399+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: cluster 2026-03-09T17:26:29.361309+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 68 pgs: 5 creating+peering, 51 unknown, 12 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T17:26:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:30 vm02 bash[23351]: cluster 2026-03-09T17:26:29.361309+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 68 pgs: 5 creating+peering, 51 unknown, 12 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T17:26:30.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: audit 2026-03-09T17:26:29.079507+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: audit 2026-03-09T17:26:29.079507+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: cluster 2026-03-09T17:26:29.084517+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: cluster 2026-03-09T17:26:29.084517+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: audit 2026-03-09T17:26:29.093399+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: audit 2026-03-09T17:26:29.093399+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: cluster 2026-03-09T17:26:29.361309+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 68 pgs: 5 creating+peering, 51 unknown, 12 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:30 vm00 bash[28333]: cluster 2026-03-09T17:26:29.361309+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 68 pgs: 5 creating+peering, 51 unknown, 12 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: audit 2026-03-09T17:26:29.079507+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: audit 2026-03-09T17:26:29.079507+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: cluster 2026-03-09T17:26:29.084517+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: cluster 2026-03-09T17:26:29.084517+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: audit 2026-03-09T17:26:29.093399+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: audit 2026-03-09T17:26:29.093399+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: cluster 2026-03-09T17:26:29.361309+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 68 pgs: 5 creating+peering, 51 unknown, 12 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T17:26:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:30 vm00 bash[20770]: cluster 2026-03-09T17:26:29.361309+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 68 pgs: 5 creating+peering, 51 unknown, 12 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: audit 2026-03-09T17:26:30.088211+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: audit 2026-03-09T17:26:30.088211+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: cluster 2026-03-09T17:26:30.107502+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: cluster 2026-03-09T17:26:30.107502+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: audit 2026-03-09T17:26:30.970930+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: audit 2026-03-09T17:26:30.970930+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: audit 2026-03-09T17:26:30.977391+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:31 vm02 bash[23351]: audit 2026-03-09T17:26:30.977391+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: audit 2026-03-09T17:26:30.088211+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T17:26:31.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: audit 2026-03-09T17:26:30.088211+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T17:26:31.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: cluster 2026-03-09T17:26:30.107502+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: cluster 2026-03-09T17:26:30.107502+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: audit 2026-03-09T17:26:30.970930+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: audit 2026-03-09T17:26:30.970930+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: audit 2026-03-09T17:26:30.977391+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:31 vm00 bash[28333]: audit 2026-03-09T17:26:30.977391+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: audit 2026-03-09T17:26:30.088211+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: audit 2026-03-09T17:26:30.088211+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: cluster 2026-03-09T17:26:30.107502+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: cluster 2026-03-09T17:26:30.107502+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: audit 2026-03-09T17:26:30.970930+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: audit 2026-03-09T17:26:30.970930+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: audit 2026-03-09T17:26:30.977391+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:31.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:31 vm00 bash[20770]: audit 2026-03-09T17:26:30.977391+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cluster 2026-03-09T17:26:31.102043+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cluster 2026-03-09T17:26:31.102043+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:31.113584+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:31.113584+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:31.315829+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:31.315829+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:31.316363+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:31.316363+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cephadm 2026-03-09T17:26:31.319278+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cephadm 2026-03-09T17:26:31.319278+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cluster 2026-03-09T17:26:31.362026+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v222: 100 pgs: 21 creating+peering, 41 unknown, 38 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cluster 2026-03-09T17:26:31.362026+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v222: 100 pgs: 21 creating+peering, 41 unknown, 38 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:32.096574+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: audit 2026-03-09T17:26:32.096574+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cluster 2026-03-09T17:26:32.105018+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:32 vm00 bash[28333]: cluster 2026-03-09T17:26:32.105018+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cluster 2026-03-09T17:26:31.102043+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cluster 2026-03-09T17:26:31.102043+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:31.113584+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:31.113584+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:31.315829+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:31.315829+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:31.316363+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:31.316363+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cephadm 2026-03-09T17:26:31.319278+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cephadm 2026-03-09T17:26:31.319278+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cluster 2026-03-09T17:26:31.362026+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v222: 100 pgs: 21 creating+peering, 41 unknown, 38 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 
op/s 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cluster 2026-03-09T17:26:31.362026+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v222: 100 pgs: 21 creating+peering, 41 unknown, 38 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:32.096574+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: audit 2026-03-09T17:26:32.096574+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cluster 2026-03-09T17:26:32.105018+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T17:26:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:32 vm00 bash[20770]: cluster 2026-03-09T17:26:32.105018+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T17:26:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cluster 2026-03-09T17:26:31.102043+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T17:26:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cluster 2026-03-09T17:26:31.102043+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T17:26:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:31.113584+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T17:26:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:31.113584+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:31.315829+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:31.315829+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:31.316363+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:31.316363+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cephadm 2026-03-09T17:26:31.319278+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cephadm 2026-03-09T17:26:31.319278+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cluster 2026-03-09T17:26:31.362026+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v222: 100 pgs: 21 creating+peering, 41 unknown, 38 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cluster 2026-03-09T17:26:31.362026+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v222: 100 pgs: 21 creating+peering, 41 unknown, 38 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:32.096574+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: audit 2026-03-09T17:26:32.096574+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cluster 2026-03-09T17:26:32.105018+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T17:26:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:32 vm02 bash[23351]: cluster 2026-03-09T17:26:32.105018+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T17:26:32.803 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:32.944260+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.102:0/1273745611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:32.944260+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.102:0/1273745611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:32.944714+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:32.944714+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.099332+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.099332+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: cluster 2026-03-09T17:26:33.102087+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: cluster 2026-03-09T17:26:33.102087+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.103644+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.103644+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 
192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.112450+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.112450+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.112536+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:33 vm00 bash[28333]: audit 2026-03-09T17:26:33.112536+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:32.944260+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.102:0/1273745611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:32.944260+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.102:0/1273745611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:32.944714+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:32.944714+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.099332+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.099332+0000 mon.a (mon.0) 680 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T17:26:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: cluster 2026-03-09T17:26:33.102087+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: cluster 2026-03-09T17:26:33.102087+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.103644+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.103644+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.112450+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.112450+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.112536+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:33 vm00 bash[20770]: audit 2026-03-09T17:26:33.112536+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:32.944260+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.102:0/1273745611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:32.944260+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.102:0/1273745611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:32.944714+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:32.944714+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.099332+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.099332+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: cluster 2026-03-09T17:26:33.102087+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: cluster 2026-03-09T17:26:33.102087+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.103644+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.103644+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.112450+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.112450+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.112536+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:33 vm02 bash[23351]: audit 2026-03-09T17:26:33.112536+0000 mon.a (mon.0) 683 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T17:26:34.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: cluster 2026-03-09T17:26:33.362437+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v225: 132 pgs: 16 creating+peering, 68 unknown, 48 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: cluster 2026-03-09T17:26:33.362437+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v225: 132 pgs: 16 creating+peering, 68 unknown, 48 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.102351+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.102351+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.102440+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.102440+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.110981+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.110981+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: cluster 2026-03-09T17:26:34.117259+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: cluster 2026-03-09T17:26:34.117259+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.126154+0000 mon.a (mon.0) 687 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.126154+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.126895+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:34 vm00 bash[28333]: audit 2026-03-09T17:26:34.126895+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: cluster 2026-03-09T17:26:33.362437+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v225: 132 pgs: 16 creating+peering, 68 unknown, 48 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: cluster 2026-03-09T17:26:33.362437+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v225: 132 pgs: 16 creating+peering, 68 unknown, 48 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.102351+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.102351+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.102440+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.102440+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.110981+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 
192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.110981+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: cluster 2026-03-09T17:26:34.117259+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: cluster 2026-03-09T17:26:34.117259+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.126154+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.126154+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.126895+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:34 vm00 bash[20770]: audit 2026-03-09T17:26:34.126895+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: cluster 2026-03-09T17:26:33.362437+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v225: 132 pgs: 16 creating+peering, 68 unknown, 48 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: cluster 2026-03-09T17:26:33.362437+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v225: 132 pgs: 16 creating+peering, 68 unknown, 48 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.102351+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.102351+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.102440+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.102440+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.110981+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.110981+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/1657504970' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: cluster 2026-03-09T17:26:34.117259+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: cluster 2026-03-09T17:26:34.117259+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.126154+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.126154+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.126895+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:34 vm02 bash[23351]: audit 2026-03-09T17:26:34.126895+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T17:26:35.201 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.102 --placement '1;vm02=iscsi.a' 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: cluster 2026-03-09T17:26:34.137389+0000 mon.a (mon.0) 689 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: cluster 2026-03-09T17:26:34.137389+0000 mon.a (mon.0) 689 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: audit 2026-03-09T17:26:35.105491+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: audit 2026-03-09T17:26:35.105491+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: audit 2026-03-09T17:26:35.105602+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: audit 2026-03-09T17:26:35.105602+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: cluster 2026-03-09T17:26:35.115041+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T17:26:35.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:35 vm02 bash[23351]: cluster 2026-03-09T17:26:35.115041+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: cluster 2026-03-09T17:26:34.137389+0000 mon.a (mon.0) 689 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: cluster 2026-03-09T17:26:34.137389+0000 mon.a (mon.0) 689 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: audit 2026-03-09T17:26:35.105491+0000 mon.a (mon.0) 690 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: audit 2026-03-09T17:26:35.105491+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: audit 2026-03-09T17:26:35.105602+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: audit 2026-03-09T17:26:35.105602+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: cluster 2026-03-09T17:26:35.115041+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:35 vm00 bash[28333]: cluster 2026-03-09T17:26:35.115041+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: cluster 2026-03-09T17:26:34.137389+0000 mon.a (mon.0) 689 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: cluster 2026-03-09T17:26:34.137389+0000 mon.a (mon.0) 689 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: audit 2026-03-09T17:26:35.105491+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: audit 2026-03-09T17:26:35.105491+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: audit 2026-03-09T17:26:35.105602+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: audit 2026-03-09T17:26:35.105602+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 
192.168.123.100:0/147358814' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: cluster 2026-03-09T17:26:35.115041+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T17:26:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[20770]: cluster 2026-03-09T17:26:35.115041+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T17:26:35.538 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 17:26:35 vm00 bash[53845]: debug 2026-03-09T17:26:35.255+0000 7f12a67cd980 -1 LDAP not started since no server URIs were provided in the configuration. 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.362809+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.362809+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: cluster 2026-03-09T17:26:35.362904+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v228: 132 pgs: 15 creating+peering, 13 unknown, 104 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: cluster 2026-03-09T17:26:35.362904+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v228: 132 pgs: 15 creating+peering, 13 unknown, 104 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.376144+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.376144+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.398334+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.398334+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.442099+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.442099+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.788993+0000 mon.a (mon.0) 697 : audit 
[DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.788993+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.789610+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.789610+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: cephadm 2026-03-09T17:26:35.792180+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: cephadm 2026-03-09T17:26:35.792180+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.994589+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:36 vm00 bash[28333]: audit 2026-03-09T17:26:35.994589+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.362809+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.362809+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: cluster 2026-03-09T17:26:35.362904+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v228: 132 pgs: 15 creating+peering, 13 unknown, 104 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: cluster 2026-03-09T17:26:35.362904+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v228: 132 pgs: 15 creating+peering, 13 unknown, 104 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.376144+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.376144+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 
2026-03-09T17:26:35.398334+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.398334+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.442099+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.442099+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.788993+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.788993+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.789610+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.789610+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: cephadm 2026-03-09T17:26:35.792180+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: cephadm 2026-03-09T17:26:35.792180+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.994589+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:36 vm00 bash[20770]: audit 2026-03-09T17:26:35.994589+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.362809+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.362809+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: cluster 2026-03-09T17:26:35.362904+0000 mgr.y 
(mgr.14150) 246 : cluster [DBG] pgmap v228: 132 pgs: 15 creating+peering, 13 unknown, 104 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: cluster 2026-03-09T17:26:35.362904+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v228: 132 pgs: 15 creating+peering, 13 unknown, 104 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.376144+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.376144+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.398334+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.398334+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.442099+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.442099+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.788993+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.788993+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.789610+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.789610+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: cephadm 2026-03-09T17:26:35.792180+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: cephadm 2026-03-09T17:26:35.792180+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking 
dashboard <-> RGW credentials 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.994589+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:36 vm02 bash[23351]: audit 2026-03-09T17:26:35.994589+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:37.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:37 vm00 bash[28333]: cluster 2026-03-09T17:26:36.360592+0000 mon.a (mon.0) 700 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T17:26:37.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:37 vm00 bash[28333]: cluster 2026-03-09T17:26:36.360592+0000 mon.a (mon.0) 700 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T17:26:37.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:37 vm00 bash[28333]: cluster 2026-03-09T17:26:36.360618+0000 mon.a (mon.0) 701 : cluster [INF] Cluster is now healthy 2026-03-09T17:26:37.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:37 vm00 bash[28333]: cluster 2026-03-09T17:26:36.360618+0000 mon.a (mon.0) 701 : cluster [INF] Cluster is now healthy 2026-03-09T17:26:37.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:37 vm00 bash[20770]: cluster 2026-03-09T17:26:36.360592+0000 mon.a (mon.0) 700 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T17:26:37.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:37 vm00 bash[20770]: cluster 2026-03-09T17:26:36.360592+0000 mon.a (mon.0) 700 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T17:26:37.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:37 vm00 bash[20770]: cluster 2026-03-09T17:26:36.360618+0000 mon.a (mon.0) 701 : cluster [INF] Cluster is now healthy 2026-03-09T17:26:37.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:37 vm00 bash[20770]: cluster 2026-03-09T17:26:36.360618+0000 mon.a (mon.0) 701 : cluster [INF] Cluster is now healthy 2026-03-09T17:26:37.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:37 vm02 bash[23351]: cluster 2026-03-09T17:26:36.360592+0000 mon.a (mon.0) 700 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T17:26:37.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:37 vm02 bash[23351]: cluster 2026-03-09T17:26:36.360592+0000 mon.a (mon.0) 700 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T17:26:37.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:37 vm02 bash[23351]: cluster 2026-03-09T17:26:36.360618+0000 mon.a (mon.0) 701 : cluster [INF] Cluster is now healthy 2026-03-09T17:26:37.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:37 vm02 bash[23351]: cluster 2026-03-09T17:26:36.360618+0000 mon.a (mon.0) 701 : cluster [INF] Cluster is now healthy 2026-03-09T17:26:38.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:38 vm00 bash[28333]: cluster 2026-03-09T17:26:37.363337+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v229: 132 pgs: 11 creating+peering, 121 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 33 KiB/s rd, 3.2 KiB/s 
wr, 77 op/s 2026-03-09T17:26:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:38 vm00 bash[28333]: cluster 2026-03-09T17:26:37.363337+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v229: 132 pgs: 11 creating+peering, 121 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 77 op/s 2026-03-09T17:26:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:38 vm00 bash[20770]: cluster 2026-03-09T17:26:37.363337+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v229: 132 pgs: 11 creating+peering, 121 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 77 op/s 2026-03-09T17:26:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:38 vm00 bash[20770]: cluster 2026-03-09T17:26:37.363337+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v229: 132 pgs: 11 creating+peering, 121 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 77 op/s 2026-03-09T17:26:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:38 vm02 bash[23351]: cluster 2026-03-09T17:26:37.363337+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v229: 132 pgs: 11 creating+peering, 121 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 77 op/s 2026-03-09T17:26:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:38 vm02 bash[23351]: cluster 2026-03-09T17:26:37.363337+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v229: 132 pgs: 11 creating+peering, 121 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 77 op/s 2026-03-09T17:26:39.892 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- 192.168.123.102:0/1642764915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc4103c40 msgr2=0x7fddc41060b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- 192.168.123.102:0/1642764915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc4103c40 0x7fddc41060b0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fddb0009a30 tx=0x7fddb002f260 comp rx=0 tx=0).stop 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- 192.168.123.102:0/1642764915 shutdown_connections 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- 192.168.123.102:0/1642764915 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc41065f0 0x7fddc4106aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- 192.168.123.102:0/1642764915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc4103c40 0x7fddc41060b0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- 192.168.123.102:0/1642764915 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41012f0 0x7fddc4103700 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.066 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- 192.168.123.102:0/1642764915 >> 192.168.123.102:0/1642764915 conn(0x7fddc40fd100 msgr2=0x7fddc40ff540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- 192.168.123.102:0/1642764915 shutdown_connections 2026-03-09T17:26:40.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- 192.168.123.102:0/1642764915 wait complete. 2026-03-09T17:26:40.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 Processor -- start 2026-03-09T17:26:40.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- start start 2026-03-09T17:26:40.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc41012f0 0x7fddc41a0a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:40.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc4103c40 0x7fddc41a0f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 0x7fddc41a8000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fddc4114370 con 0x7fddc4103c40 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fddc41141f0 con 0x7fddc41012f0 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddca646640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fddc41144f0 con 0x7fddc41065f0 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddc8bbc640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 0x7fddc41a8000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddc37fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc4103c40 0x7fddc41a0f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddc3fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc41012f0 0x7fddc41a0a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddc8bbc640 1 --2- >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 0x7fddc41a8000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.102:56472/0 (socket says 192.168.123.102:56472) 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddc8bbc640 1 -- 192.168.123.102:0/3764144412 learned_addr learned my addr 192.168.123.102:0/3764144412 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.064+0000 7fddc37fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc4103c40 0x7fddc41a0f80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:41282/0 (socket says 192.168.123.102:41282) 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc8bbc640 1 -- 192.168.123.102:0/3764144412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc41012f0 msgr2=0x7fddc41a0a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc8bbc640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc41012f0 0x7fddc41a0a40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc8bbc640 1 -- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc4103c40 msgr2=0x7fddc41a0f80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc8bbc640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc4103c40 0x7fddc41a0f80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc8bbc640 1 -- 192.168.123.102:0/3764144412 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fddc41a8700 con 0x7fddc41065f0 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc8bbc640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 0x7fddc41a8000 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fddb800ef30 tx=0x7fddb800c560 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fddb8019070 con 0x7fddc41065f0 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fddb80092d0 con 0x7fddc41065f0 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 --> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fddc41a8990 con 0x7fddc41065f0 2026-03-09T17:26:40.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fddc41a8ef0 con 0x7fddc41065f0 2026-03-09T17:26:40.070 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fdd88005180 con 0x7fddc41065f0 2026-03-09T17:26:40.070 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fddb8004880 con 0x7fddc41065f0 2026-03-09T17:26:40.071 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fddb8002a60 con 0x7fddc41065f0 2026-03-09T17:26:40.071 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc17fa640 1 --2- 192.168.123.102:0/3764144412 >> v2:192.168.123.100:6800/3114914985 conn(0x7fdda4077540 0x7fdda4079a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:40.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(61..61 src has 1..61) ==== 5950+0+0 (secure 0 0 0) 0x7fddb8099d00 con 0x7fddc41065f0 2026-03-09T17:26:40.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.068+0000 7fddc3fff640 1 --2- 192.168.123.102:0/3764144412 >> v2:192.168.123.100:6800/3114914985 conn(0x7fdda4077540 0x7fdda4079a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:40.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.072+0000 7fddc3fff640 1 --2- 192.168.123.102:0/3764144412 >> v2:192.168.123.100:6800/3114914985 conn(0x7fdda4077540 0x7fdda4079a00 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fddb4004500 tx=0x7fddb4009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:40.074 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.072+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fddb8010040 con 0x7fddc41065f0 2026-03-09T17:26:40.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.176+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}) -- 0x7fdd88002cc0 con 0x7fdda4077540 2026-03-09T17:26:40.185 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.184+0000 7fddc17fa640 1 -- 192.168.123.102:0/3764144412 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+35 (secure 0 0 0) 0x7fdd88002cc0 
con 0x7fdda4077540 2026-03-09T17:26:40.185 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled iscsi.datapool update... 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.184+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 >> v2:192.168.123.100:6800/3114914985 conn(0x7fdda4077540 msgr2=0x7fdda4079a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.184+0000 7fddca646640 1 --2- 192.168.123.102:0/3764144412 >> v2:192.168.123.100:6800/3114914985 conn(0x7fdda4077540 0x7fdda4079a00 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fddb4004500 tx=0x7fddb4009290 comp rx=0 tx=0).stop 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.184+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 msgr2=0x7fddc41a8000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.184+0000 7fddca646640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 0x7fddc41a8000 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fddb800ef30 tx=0x7fddb800c560 comp rx=0 tx=0).stop 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 shutdown_connections 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 --2- 192.168.123.102:0/3764144412 >> v2:192.168.123.100:6800/3114914985 conn(0x7fdda4077540 0x7fdda4079a00 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fddc41065f0 0x7fddc41a8000 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fddc4103c40 0x7fddc41a0f80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 --2- 192.168.123.102:0/3764144412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fddc41012f0 0x7fddc41a0a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 >> 192.168.123.102:0/3764144412 conn(0x7fddc40fd100 msgr2=0x7fddc4101ef0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 shutdown_connections 2026-03-09T17:26:40.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:40.188+0000 7fddca646640 1 -- 192.168.123.102:0/3764144412 wait complete. 2026-03-09T17:26:40.258 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 
2026-03-09T17:26:40.258 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:26:40.258 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T17:26:40.265 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:26:40.265 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T17:26:40.274 DEBUG:teuthology.orchestra.run.vm02:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@iscsi.iscsi.a.service 2026-03-09T17:26:40.317 INFO:tasks.cephadm:Adding prometheus.a on vm02 2026-03-09T17:26:40.317 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply prometheus '1;vm02=a' 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: cluster 2026-03-09T17:26:39.363749+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 127 op/s 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: cluster 2026-03-09T17:26:39.363749+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 127 op/s 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.185381+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.185381+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.186376+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.186376+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.187551+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.187551+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.187986+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.187986+0000 mon.a (mon.0) 705 : 
audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.192224+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.192224+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.193987+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.193987+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.195992+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.195992+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.200515+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 bash[23351]: audit 2026-03-09T17:26:40.200515+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.486 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:40.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: cluster 2026-03-09T17:26:39.363749+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 127 op/s 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: cluster 2026-03-09T17:26:39.363749+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 127 op/s 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.185381+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.185381+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.186376+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.186376+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.187551+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.187551+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.187986+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.187986+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.192224+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.192224+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.193987+0000 mon.a (mon.0) 707 : 
audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.193987+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.195992+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.195992+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.200515+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:40 vm00 bash[28333]: audit 2026-03-09T17:26:40.200515+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: cluster 2026-03-09T17:26:39.363749+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 127 op/s 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: cluster 2026-03-09T17:26:39.363749+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 127 op/s 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.185381+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.185381+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.186376+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.186376+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.187551+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.187551+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.187986+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.187986+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.192224+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.192224+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.193987+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.193987+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.195992+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow 
command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.195992+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.200515+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:40 vm00 bash[20770]: audit 2026-03-09T17:26:40.200515+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:41.045 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:26:41.045 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:26:40 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:41.385 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 systemd[1]: Started Ceph iscsi.iscsi.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T17:26:41.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:40.179668+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:40.179668+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: cephadm 2026-03-09T17:26:40.180899+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;count:1 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: cephadm 2026-03-09T17:26:40.180899+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;count:1 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: cephadm 2026-03-09T17:26:40.201077+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: cephadm 2026-03-09T17:26:40.201077+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.090983+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.090983+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.099951+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.099951+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.108614+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.108614+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.120222+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.120222+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.132106+0000 mon.a (mon.0) 714 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.132106+0000 mon.a (mon.0) 714 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.335896+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:41 vm00 bash[28333]: audit 2026-03-09T17:26:41.335896+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:40.179668+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:40.179668+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: cephadm 2026-03-09T17:26:40.180899+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;count:1 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: cephadm 2026-03-09T17:26:40.180899+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;count:1 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: cephadm 2026-03-09T17:26:40.201077+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: cephadm 
2026-03-09T17:26:40.201077+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.090983+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.090983+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.099951+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.099951+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.108614+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.108614+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.120222+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.120222+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.132106+0000 mon.a (mon.0) 714 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.132106+0000 mon.a (mon.0) 714 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.335896+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T17:26:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:41 vm00 bash[20770]: audit 2026-03-09T17:26:41.335896+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:40.179668+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:40.179668+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102", "placement": "1;vm02=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: cephadm 2026-03-09T17:26:40.180899+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;count:1 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: cephadm 2026-03-09T17:26:40.180899+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;count:1 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: cephadm 2026-03-09T17:26:40.201077+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: cephadm 2026-03-09T17:26:40.201077+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.090983+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.090983+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.099951+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.099951+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.108614+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.108614+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.120222+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.120222+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.132106+0000 mon.a (mon.0) 714 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 
2026-03-09T17:26:41.132106+0000 mon.a (mon.0) 714 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.335896+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:41 vm02 bash[23351]: audit 2026-03-09T17:26:41.335896+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T17:26:41.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug Started the configuration object watcher 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug Checking for config object changes every 1s 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug Processing osd blocklist entries for this node 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug Reading the configuration object to update local LIO configuration 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug Configuration does not have an entry for this host(vm02.local) - nothing to define to LIO 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: * Environment: production 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: Use a production WSGI server instead. 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: * Debug mode: off 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug * Running on all addresses. 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: * Running on all addresses. 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: WARNING: This is a development server. Do not use it in a production deployment. 
2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T17:26:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:41 vm02 bash[48996]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cephadm 2026-03-09T17:26:41.109165+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cephadm 2026-03-09T17:26:41.109165+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cluster 2026-03-09T17:26:41.364122+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 144 op/s 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cluster 2026-03-09T17:26:41.364122+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 144 op/s 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: audit 2026-03-09T17:26:41.587736+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1192227753' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: audit 2026-03-09T17:26:41.587736+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.102:0/1192227753' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: audit 2026-03-09T17:26:42.132992+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: audit 2026-03-09T17:26:42.132992+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cluster 2026-03-09T17:26:42.162440+0000 mon.a (mon.0) 717 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cluster 2026-03-09T17:26:42.162440+0000 mon.a (mon.0) 717 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cluster 2026-03-09T17:26:42.162531+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T17:26:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:42 vm02 bash[23351]: cluster 2026-03-09T17:26:42.162531+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T17:26:42.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cephadm 2026-03-09T17:26:41.109165+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T17:26:42.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cephadm 2026-03-09T17:26:41.109165+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cluster 2026-03-09T17:26:41.364122+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 144 op/s 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cluster 2026-03-09T17:26:41.364122+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 144 op/s 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: audit 2026-03-09T17:26:41.587736+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1192227753' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: audit 2026-03-09T17:26:41.587736+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.102:0/1192227753' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: audit 2026-03-09T17:26:42.132992+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: audit 2026-03-09T17:26:42.132992+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cluster 2026-03-09T17:26:42.162440+0000 mon.a (mon.0) 717 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cluster 2026-03-09T17:26:42.162440+0000 mon.a (mon.0) 717 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cluster 2026-03-09T17:26:42.162531+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:42 vm00 bash[28333]: cluster 2026-03-09T17:26:42.162531+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cephadm 2026-03-09T17:26:41.109165+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cephadm 2026-03-09T17:26:41.109165+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cluster 2026-03-09T17:26:41.364122+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 144 op/s 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cluster 2026-03-09T17:26:41.364122+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 144 op/s 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: audit 2026-03-09T17:26:41.587736+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1192227753' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: audit 2026-03-09T17:26:41.587736+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.102:0/1192227753' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: audit 2026-03-09T17:26:42.132992+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: audit 2026-03-09T17:26:42.132992+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cluster 2026-03-09T17:26:42.162440+0000 mon.a (mon.0) 717 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cluster 2026-03-09T17:26:42.162440+0000 mon.a (mon.0) 717 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cluster 2026-03-09T17:26:42.162531+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T17:26:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:42 vm00 bash[20770]: cluster 2026-03-09T17:26:42.162531+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T17:26:44.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:44 vm00 bash[28333]: cluster 2026-03-09T17:26:43.139657+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T17:26:44.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:44 vm00 bash[28333]: cluster 2026-03-09T17:26:43.139657+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:44 vm00 bash[28333]: cluster 2026-03-09T17:26:43.364407+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:44 vm00 bash[28333]: cluster 2026-03-09T17:26:43.364407+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:44 vm00 bash[28333]: audit 2026-03-09T17:26:43.939870+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:44 vm00 bash[28333]: audit 2026-03-09T17:26:43.939870+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:44 vm00 bash[20770]: cluster 2026-03-09T17:26:43.139657+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:44 vm00 bash[20770]: cluster 2026-03-09T17:26:43.139657+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:44 vm00 
bash[20770]: cluster 2026-03-09T17:26:43.364407+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:44 vm00 bash[20770]: cluster 2026-03-09T17:26:43.364407+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:44 vm00 bash[20770]: audit 2026-03-09T17:26:43.939870+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:44 vm00 bash[20770]: audit 2026-03-09T17:26:43.939870+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:44 vm02 bash[23351]: cluster 2026-03-09T17:26:43.139657+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T17:26:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:44 vm02 bash[23351]: cluster 2026-03-09T17:26:43.139657+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T17:26:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:44 vm02 bash[23351]: cluster 2026-03-09T17:26:43.364407+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s 2026-03-09T17:26:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:44 vm02 bash[23351]: cluster 2026-03-09T17:26:43.364407+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s 2026-03-09T17:26:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:44 vm02 bash[23351]: audit 2026-03-09T17:26:43.939870+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:44 vm02 bash[23351]: audit 2026-03-09T17:26:43.939870+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:45.004 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.192+0000 7ff45daea640 1 -- 192.168.123.102:0/1828572361 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff45810a470 msgr2=0x7ff45810a8f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.192+0000 7ff45daea640 1 --2- 192.168.123.102:0/1828572361 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff45810a470 0x7ff45810a8f0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7ff444009fc0 tx=0x7ff44402f3d0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- 192.168.123.102:0/1828572361 shutdown_connections 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 --2- 192.168.123.102:0/1828572361 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810ae30 0x7ff4581116e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 --2- 192.168.123.102:0/1828572361 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff45810a470 0x7ff45810a8f0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 --2- 192.168.123.102:0/1828572361 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff458074180 0x7ff458074580 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- 192.168.123.102:0/1828572361 >> 192.168.123.102:0/1828572361 conn(0x7ff45806f920 msgr2=0x7ff458071d60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- 192.168.123.102:0/1828572361 shutdown_connections 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- 192.168.123.102:0/1828572361 wait complete. 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 Processor -- start 2026-03-09T17:26:45.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- start start 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff458074180 0x7ff458119cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810a470 0x7ff45811a230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff45810ae30 0x7ff458116900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff45811ccf0 con 0x7ff45810ae30 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7ff45811cb70 con 0x7ff45810a470 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff45daea640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff45811ce70 con 0x7ff458074180 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff457fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff45810ae30 0x7ff458116900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff457fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff45810ae30 0x7ff458116900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:33294/0 (socket says 192.168.123.102:33294) 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff457fff640 1 -- 192.168.123.102:0/3080102276 learned_addr learned my addr 192.168.123.102:0/3080102276 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:26:45.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff4577fe640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff458074180 0x7ff458119cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:45.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810a470 0x7ff45811a230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:45.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 -- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff458074180 msgr2=0x7ff458119cf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:45.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff458074180 0x7ff458119cf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 -- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff45810ae30 msgr2=0x7ff458116900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:45.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff45810ae30 0x7ff458116900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 -- 192.168.123.102:0/3080102276 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff4581171c0 con 0x7ff45810a470 2026-03-09T17:26:45.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff456ffd640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810a470 0x7ff45811a230 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7ff44402f8e0 tx=0x7ff444002d10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:45.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 
0 0 0) 0x7ff444004d40 con 0x7ff45810a470 2026-03-09T17:26:45.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.196+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7ff444007a10 con 0x7ff45810a470 2026-03-09T17:26:45.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.200+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff4440388c0 con 0x7ff45810a470 2026-03-09T17:26:45.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.200+0000 7ff45daea640 1 -- 192.168.123.102:0/3080102276 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff4581b9d10 con 0x7ff45810a470 2026-03-09T17:26:45.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.200+0000 7ff45daea640 1 -- 192.168.123.102:0/3080102276 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff4581ba110 con 0x7ff45810a470 2026-03-09T17:26:45.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.200+0000 7ff45daea640 1 -- 192.168.123.102:0/3080102276 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff458074580 con 0x7ff45810a470 2026-03-09T17:26:45.211 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.204+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 16) ==== 100051+0+0 (secure 0 0 0) 0x7ff4440045a0 con 0x7ff45810a470 2026-03-09T17:26:45.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.204+0000 7ff454ff9640 1 --2- 192.168.123.102:0/3080102276 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff42c077620 0x7ff42c079ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:45.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.204+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(63..63 src has 1..63) ==== 5951+0+0 (secure 0 0 0) 0x7ff4440be6a0 con 0x7ff45810a470 2026-03-09T17:26:45.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.204+0000 7ff4577fe640 1 --2- 192.168.123.102:0/3080102276 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff42c077620 0x7ff42c079ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:45.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.204+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff44403d070 con 0x7ff45810a470 2026-03-09T17:26:45.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.208+0000 7ff4577fe640 1 --2- 192.168.123.102:0/3080102276 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff42c077620 0x7ff42c079ae0 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7ff4580750d0 tx=0x7ff448009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:45.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.320+0000 7ff45daea640 1 -- 192.168.123.102:0/3080102276 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": 
"prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}) -- 0x7ff4580630c0 con 0x7ff42c077620 2026-03-09T17:26:45.331 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.328+0000 7ff454ff9640 1 -- 192.168.123.102:0/3080102276 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+31 (secure 0 0 0) 0x7ff4580630c0 con 0x7ff42c077620 2026-03-09T17:26:45.331 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled prometheus update... 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 -- 192.168.123.102:0/3080102276 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff42c077620 msgr2=0x7ff42c079ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 --2- 192.168.123.102:0/3080102276 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff42c077620 0x7ff42c079ae0 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7ff4580750d0 tx=0x7ff448009290 comp rx=0 tx=0).stop 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 -- 192.168.123.102:0/3080102276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810a470 msgr2=0x7ff45811a230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810a470 0x7ff45811a230 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7ff44402f8e0 tx=0x7ff444002d10 comp rx=0 tx=0).stop 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 -- 192.168.123.102:0/3080102276 shutdown_connections 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 --2- 192.168.123.102:0/3080102276 >> v2:192.168.123.100:6800/3114914985 conn(0x7ff42c077620 0x7ff42c079ae0 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff45810ae30 0x7ff458116900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff45810a470 0x7ff45811a230 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 --2- 192.168.123.102:0/3080102276 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff458074180 0x7ff458119cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 -- 192.168.123.102:0/3080102276 >> 192.168.123.102:0/3080102276 conn(0x7ff45806f920 msgr2=0x7ff4580724b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 -- 192.168.123.102:0/3080102276 
shutdown_connections 2026-03-09T17:26:45.334 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:45.332+0000 7ff4327fc640 1 -- 192.168.123.102:0/3080102276 wait complete. 2026-03-09T17:26:45.401 DEBUG:teuthology.orchestra.run.vm02:prometheus.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@prometheus.a.service 2026-03-09T17:26:45.402 INFO:tasks.cephadm:Adding node-exporter.a on vm00 2026-03-09T17:26:45.402 INFO:tasks.cephadm:Adding node-exporter.b on vm02 2026-03-09T17:26:45.402 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply node-exporter '2;vm00=a;vm02=b' 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.325466+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.325466+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cephadm 2026-03-09T17:26:45.326313+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm02=a;count:1 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cephadm 2026-03-09T17:26:45.326313+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm02=a;count:1 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.331064+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.331064+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cluster 2026-03-09T17:26:45.364809+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s rd, 2.7 KiB/s wr, 95 op/s 2026-03-09T17:26:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cluster 2026-03-09T17:26:45.364809+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s rd, 2.7 KiB/s wr, 95 op/s 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.407258+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.407258+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.413191+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.413191+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.413905+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.413905+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.414339+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.414339+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.418699+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: audit 2026-03-09T17:26:45.418699+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cephadm 2026-03-09T17:26:45.577359+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm02 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cephadm 2026-03-09T17:26:45.577359+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm02 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cluster 2026-03-09T17:26:46.204389+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T17:26:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:46 vm02 bash[23351]: cluster 2026-03-09T17:26:46.204389+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T17:26:46.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.325466+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.325466+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:46.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cephadm 2026-03-09T17:26:45.326313+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm02=a;count:1 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cephadm 2026-03-09T17:26:45.326313+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm02=a;count:1 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.331064+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.331064+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cluster 2026-03-09T17:26:45.364809+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s rd, 2.7 KiB/s wr, 95 op/s 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cluster 2026-03-09T17:26:45.364809+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s rd, 2.7 KiB/s wr, 95 op/s 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.407258+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.407258+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.413191+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.413191+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.413905+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.413905+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.414339+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.414339+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.418699+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: audit 2026-03-09T17:26:45.418699+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cephadm 2026-03-09T17:26:45.577359+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm02 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cephadm 2026-03-09T17:26:45.577359+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm02 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cluster 2026-03-09T17:26:46.204389+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:46 vm00 bash[28333]: cluster 2026-03-09T17:26:46.204389+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.325466+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.325466+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cephadm 2026-03-09T17:26:45.326313+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm02=a;count:1 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cephadm 2026-03-09T17:26:45.326313+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm02=a;count:1 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.331064+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.331064+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cluster 2026-03-09T17:26:45.364809+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s rd, 2.7 KiB/s wr, 95 op/s 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cluster 2026-03-09T17:26:45.364809+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s rd, 2.7 KiB/s wr, 95 op/s 2026-03-09T17:26:46.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.407258+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.407258+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.413191+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.413191+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.413905+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.413905+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.414339+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.414339+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.418699+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: audit 2026-03-09T17:26:45.418699+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cephadm 2026-03-09T17:26:45.577359+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm02 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cephadm 2026-03-09T17:26:45.577359+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm02 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cluster 2026-03-09T17:26:46.204389+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T17:26:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:46 vm00 bash[20770]: cluster 2026-03-09T17:26:46.204389+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T17:26:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:48 vm02 bash[23351]: cluster 2026-03-09T17:26:47.365138+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 682 B/s rd, 170 B/s wr, 2 op/s 2026-03-09T17:26:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:48 vm02 bash[23351]: cluster 2026-03-09T17:26:47.365138+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 170 B/s wr, 2 op/s 2026-03-09T17:26:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:48 vm00 bash[28333]: cluster 2026-03-09T17:26:47.365138+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 170 B/s wr, 2 op/s 2026-03-09T17:26:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:48 vm00 bash[28333]: cluster 2026-03-09T17:26:47.365138+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 170 B/s wr, 2 op/s 2026-03-09T17:26:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:48 vm00 bash[20770]: cluster 2026-03-09T17:26:47.365138+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 170 B/s wr, 2 op/s 2026-03-09T17:26:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:48 vm00 bash[20770]: cluster 2026-03-09T17:26:47.365138+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 170 B/s wr, 2 op/s 2026-03-09T17:26:50.074 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:50.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- 192.168.123.102:0/3958831012 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90073b70 msgr2=0x7f5b90073ff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:50.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- 192.168.123.102:0/3958831012 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90073b70 0x7f5b90073ff0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f5b8800b600 tx=0x7f5b880305c0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- 192.168.123.102:0/3958831012 shutdown_connections 2026-03-09T17:26:50.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- 192.168.123.102:0/3958831012 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5b90074530 0x7f5b9007b180 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- 192.168.123.102:0/3958831012 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90073b70 0x7f5b90073ff0 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- 192.168.123.102:0/3958831012 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b9010a850 0x7f5b9010ac50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 
7f5b983e2640 1 -- 192.168.123.102:0/3958831012 >> 192.168.123.102:0/3958831012 conn(0x7f5b9006f810 msgr2=0x7f5b90071c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- 192.168.123.102:0/3958831012 shutdown_connections 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- 192.168.123.102:0/3958831012 wait complete. 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 Processor -- start 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- start start 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b90074530 0x7f5b90083800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5b9010a850 0x7f5b90083d40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 0x7f5b90085240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f5b9007dbc0 con 0x7f5b90074530 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f5b9007da40 con 0x7f5b90084d90 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f5b9007dd40 con 0x7f5b9010a850 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 0x7f5b90085240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 0x7f5b90085240 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:59322/0 (socket says 192.168.123.102:59322) 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 -- 192.168.123.102:0/2252092307 learned_addr learned my addr 192.168.123.102:0/2252092307 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b95956640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5b9010a850 0x7f5b90083d40 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96157640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b90074530 0x7f5b90083800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 -- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5b9010a850 msgr2=0x7f5b90083d40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5b9010a850 0x7f5b90083d40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 -- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b90074530 msgr2=0x7f5b90083800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b90074530 0x7f5b90083800 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 -- 192.168.123.102:0/2252092307 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5b90085880 con 0x7f5b90084d90 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b96958640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 0x7f5b90085240 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f5b8000b520 tx=0x7f5b8000b9f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:50.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5b80013020 con 0x7f5b90084d90 2026-03-09T17:26:50.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5b90085b70 con 0x7f5b90084d90 2026-03-09T17:26:50.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.624+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5b90086180 con 0x7f5b90084d90 2026-03-09T17:26:50.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f5b80004480 con 0x7f5b90084d90 2026-03-09T17:26:50.630 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5b8000fa90 con 0x7f5b90084d90 2026-03-09T17:26:50.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 16) ==== 100051+0+0 (secure 0 0 0) 0x7f5b80020020 con 0x7f5b90084d90 2026-03-09T17:26:50.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5b60005180 con 0x7f5b90084d90 2026-03-09T17:26:50.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b877fe640 1 --2- 192.168.123.102:0/2252092307 >> v2:192.168.123.100:6800/3114914985 conn(0x7f5b70077660 0x7f5b70079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:50.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 5951+0+0 (secure 0 0 0) 0x7f5b80099dd0 con 0x7f5b90084d90 2026-03-09T17:26:50.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b96157640 1 --2- 192.168.123.102:0/2252092307 >> v2:192.168.123.100:6800/3114914985 conn(0x7f5b70077660 0x7f5b70079b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:50.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.628+0000 7f5b96157640 1 --2- 192.168.123.102:0/2252092307 >> v2:192.168.123.100:6800/3114914985 conn(0x7f5b70077660 0x7f5b70079b20 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f5b90074fb0 tx=0x7f5b8c009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:50.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.632+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5b80066b40 con 0x7f5b90084d90 2026-03-09T17:26:50.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.740+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 --> v2:192.168.123.100:6800/3114914985 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}) -- 0x7f5b60002bf0 con 0x7f5b70077660 2026-03-09T17:26:50.752 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled node-exporter update... 
2026-03-09T17:26:50.752 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.748+0000 7f5b877fe640 1 -- 192.168.123.102:0/2252092307 <== mgr.14150 v2:192.168.123.100:6800/3114914985 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+34 (secure 0 0 0) 0x7f5b60002bf0 con 0x7f5b70077660 2026-03-09T17:26:50.755 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 >> v2:192.168.123.100:6800/3114914985 conn(0x7f5b70077660 msgr2=0x7f5b70079b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:50.755 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 --2- 192.168.123.102:0/2252092307 >> v2:192.168.123.100:6800/3114914985 conn(0x7f5b70077660 0x7f5b70079b20 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f5b90074fb0 tx=0x7f5b8c009290 comp rx=0 tx=0).stop 2026-03-09T17:26:50.755 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 msgr2=0x7f5b90085240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 0x7f5b90085240 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f5b8000b520 tx=0x7f5b8000b9f0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 shutdown_connections 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 --2- 192.168.123.102:0/2252092307 >> v2:192.168.123.100:6800/3114914985 conn(0x7f5b70077660 0x7f5b70079b20 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5b90084d90 0x7f5b90085240 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5b9010a850 0x7f5b90083d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 --2- 192.168.123.102:0/2252092307 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b90074530 0x7f5b90083800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 >> 192.168.123.102:0/2252092307 conn(0x7f5b9006f810 msgr2=0x7f5b900718e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 shutdown_connections 2026-03-09T17:26:50.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:50.752+0000 7f5b983e2640 1 -- 192.168.123.102:0/2252092307 wait complete. 
2026-03-09T17:26:50.764 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:50 vm02 bash[23351]: cluster 2026-03-09T17:26:49.365584+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 566 B/s rd, 141 B/s wr, 1 op/s 2026-03-09T17:26:50.764 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:50 vm02 bash[23351]: cluster 2026-03-09T17:26:49.365584+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 566 B/s rd, 141 B/s wr, 1 op/s 2026-03-09T17:26:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:50 vm00 bash[28333]: cluster 2026-03-09T17:26:49.365584+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 566 B/s rd, 141 B/s wr, 1 op/s 2026-03-09T17:26:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:50 vm00 bash[28333]: cluster 2026-03-09T17:26:49.365584+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 566 B/s rd, 141 B/s wr, 1 op/s 2026-03-09T17:26:50.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:50 vm00 bash[20770]: cluster 2026-03-09T17:26:49.365584+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 566 B/s rd, 141 B/s wr, 1 op/s 2026-03-09T17:26:50.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:50 vm00 bash[20770]: cluster 2026-03-09T17:26:49.365584+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 566 B/s rd, 141 B/s wr, 1 op/s 2026-03-09T17:26:50.901 DEBUG:teuthology.orchestra.run.vm00:node-exporter.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@node-exporter.a.service 2026-03-09T17:26:50.902 DEBUG:teuthology.orchestra.run.vm02:node-exporter.b> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@node-exporter.b.service 2026-03-09T17:26:50.904 INFO:tasks.cephadm:Adding alertmanager.a on vm00 2026-03-09T17:26:50.904 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply alertmanager '1;vm00=a' 2026-03-09T17:26:51.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 bash[23351]: audit 2026-03-09T17:26:50.743998+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 bash[23351]: audit 2026-03-09T17:26:50.743998+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 bash[23351]: cephadm 2026-03-09T17:26:50.745076+0000 mgr.y (mgr.14150) 263 : cephadm [INF] 
Saving service node-exporter spec with placement vm00=a;vm02=b;count:2 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 bash[23351]: cephadm 2026-03-09T17:26:50.745076+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm02=b;count:2 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 bash[23351]: audit 2026-03-09T17:26:50.751475+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 bash[23351]: audit 2026-03-09T17:26:50.751475+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:51.911 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:51.911 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:51.911 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:51.911 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:51.911 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:26:51.911 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:51.912 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:51 vm00 bash[28333]: audit 2026-03-09T17:26:50.743998+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:51 vm00 bash[28333]: audit 2026-03-09T17:26:50.743998+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:51 vm00 bash[28333]: cephadm 2026-03-09T17:26:50.745076+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm02=b;count:2 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:51 vm00 bash[28333]: cephadm 2026-03-09T17:26:50.745076+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm02=b;count:2 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:51 vm00 bash[28333]: audit 2026-03-09T17:26:50.751475+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:51 vm00 bash[28333]: audit 2026-03-09T17:26:50.751475+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:51 vm00 bash[20770]: audit 2026-03-09T17:26:50.743998+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:51 vm00 bash[20770]: audit 2026-03-09T17:26:50.743998+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:51 vm00 bash[20770]: cephadm 2026-03-09T17:26:50.745076+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm02=b;count:2 2026-03-09T17:26:52.038 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:51 vm00 bash[20770]: cephadm 2026-03-09T17:26:50.745076+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm02=b;count:2 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:51 vm00 bash[20770]: audit 2026-03-09T17:26:50.751475+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:51 vm00 bash[20770]: audit 2026-03-09T17:26:50.751475+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:52.183 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:26:52.183 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:51 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.183 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:26:52.635 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 systemd[1]: Started Ceph prometheus.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 
2026-03-09T17:26:52.635 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.310Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T17:26:52.635 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.310Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T17:26:52.635 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.310Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm02 (none))" 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.310Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.311Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.314Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.315Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.316Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.316Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.316Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.317Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.483µs 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.317Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.317Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.317Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=49.944µs wal_replay_duration=236.502µs wbl_replay_duration=290ns total_replay_duration=438.601µs 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.318Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.318Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.319Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.331Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=12.65964ms db_storage=751ns remote_storage=1.012µs web_handler=431ns query_engine=490ns scrape=1.208749ms scrape_sd=70.723µs notify=481ns notify_sd=611ns rules=11.076461ms tracing=5.009µs 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.331Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T17:26:52.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:26:52 vm02 bash[49957]: ts=2026-03-09T17:26:52.331Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: cluster 2026-03-09T17:26:51.366038+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 124 B/s wr, 2 op/s 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: cluster 2026-03-09T17:26:51.366038+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 124 B/s wr, 2 op/s 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:51.438893+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:51.438893+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.217406+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.217406+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.223776+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.223776+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.236130+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.236130+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.241657+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:52 vm00 bash[28333]: audit 2026-03-09T17:26:52.241657+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: cluster 2026-03-09T17:26:51.366038+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 124 B/s wr, 2 op/s 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: cluster 
2026-03-09T17:26:51.366038+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 124 B/s wr, 2 op/s 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:51.438893+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:51.438893+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.217406+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.217406+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.223776+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.223776+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.236130+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.236130+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.241657+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T17:26:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:52 vm00 bash[20770]: audit 2026-03-09T17:26:52.241657+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: cluster 2026-03-09T17:26:51.366038+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 124 B/s wr, 2 op/s 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: cluster 2026-03-09T17:26:51.366038+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 124 B/s wr, 2 op/s 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:51.438893+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:51.438893+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.217406+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.217406+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.223776+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.223776+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.236130+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.236130+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.241657+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T17:26:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:52 vm02 bash[23351]: audit 2026-03-09T17:26:52.241657+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T17:26:53.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:53 vm00 bash[21037]: ignoring --setuser ceph since I am not root 2026-03-09T17:26:53.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:53 vm00 bash[21037]: ignoring --setgroup ceph since I am not root 2026-03-09T17:26:53.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:53 vm00 bash[21037]: debug 2026-03-09T17:26:53.371+0000 7fc6535e9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:26:53.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:53 vm00 bash[21037]: debug 2026-03-09T17:26:53.407+0000 7fc6535e9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:26:53.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:53 vm00 bash[21037]: debug 2026-03-09T17:26:53.531+0000 7fc6535e9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:26:53.542 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:53 vm02 bash[24073]: ignoring --setuser ceph since I am not root 2026-03-09T17:26:53.542 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:53 vm02 bash[24073]: ignoring --setgroup ceph since I am not root 2026-03-09T17:26:53.542 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:53 
vm02 bash[24073]: debug 2026-03-09T17:26:53.376+0000 7f5ac358a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:26:53.542 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:53 vm02 bash[24073]: debug 2026-03-09T17:26:53.412+0000 7f5ac358a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:26:53.848 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:53 vm02 bash[24073]: debug 2026-03-09T17:26:53.540+0000 7f5ac358a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:26:54.135 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:53 vm02 bash[24073]: debug 2026-03-09T17:26:53.844+0000 7f5ac358a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:26:54.248 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:53 vm00 bash[21037]: debug 2026-03-09T17:26:53.823+0000 7fc6535e9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:26:54.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:54 vm00 bash[28333]: audit 2026-03-09T17:26:53.247301+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T17:26:54.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:54 vm00 bash[28333]: audit 2026-03-09T17:26:53.247301+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:54 vm00 bash[28333]: cluster 2026-03-09T17:26:53.254698+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:54 vm00 bash[28333]: cluster 2026-03-09T17:26:53.254698+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:54 vm00 bash[20770]: audit 2026-03-09T17:26:53.247301+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:54 vm00 bash[20770]: audit 2026-03-09T17:26:53.247301+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:54 vm00 bash[20770]: cluster 2026-03-09T17:26:53.254698+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:54 vm00 bash[20770]: cluster 2026-03-09T17:26:53.254698+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.323+0000 7fc6535e9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:26:54.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.415+0000 7fc6535e9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:26:54.579 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:54 vm02 bash[23351]: audit 2026-03-09T17:26:53.247301+0000 mon.a 
(mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T17:26:54.579 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:54 vm02 bash[23351]: audit 2026-03-09T17:26:53.247301+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.100:0/93899313' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T17:26:54.579 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:54 vm02 bash[23351]: cluster 2026-03-09T17:26:53.254698+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T17:26:54.579 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:54 vm02 bash[23351]: cluster 2026-03-09T17:26:53.254698+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T17:26:54.580 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.360+0000 7f5ac358a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:26:54.580 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.452+0000 7f5ac358a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: from numpy import show_config as show_numpy_config 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.547+0000 7fc6535e9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.687+0000 7fc6535e9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.731+0000 7fc6535e9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:26:54.817 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.771+0000 7fc6535e9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: from numpy import show_config as show_numpy_config 2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.584+0000 7f5ac358a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.736+0000 7f5ac358a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.780+0000 7f5ac358a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:26:54.867 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.820+0000 7f5ac358a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:26:55.135 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.864+0000 7f5ac358a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:26:55.135 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:54 vm02 bash[24073]: debug 2026-03-09T17:26:54.920+0000 7f5ac358a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:26:55.288 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.815+0000 7fc6535e9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:26:55.288 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:54 vm00 bash[21037]: debug 2026-03-09T17:26:54.867+0000 7fc6535e9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:26:55.585 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.319+0000 7fc6535e9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:26:55.585 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.359+0000 7fc6535e9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:26:55.585 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.395+0000 7fc6535e9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T17:26:55.585 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.539+0000 7fc6535e9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:26:55.654 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.384+0000 7f5ac358a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:26:55.654 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.424+0000 7f5ac358a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:26:55.654 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.460+0000 7f5ac358a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T17:26:55.654 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: 
debug 2026-03-09T17:26:55.608+0000 7f5ac358a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:26:55.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.583+0000 7fc6535e9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:26:55.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.627+0000 7fc6535e9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:26:55.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.739+0000 7fc6535e9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:26:55.964 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.652+0000 7f5ac358a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:26:55.965 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.692+0000 7f5ac358a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:26:55.965 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.804+0000 7f5ac358a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:26:56.173 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:55 vm00 bash[21037]: debug 2026-03-09T17:26:55.899+0000 7fc6535e9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:26:56.173 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: debug 2026-03-09T17:26:56.087+0000 7fc6535e9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:26:56.173 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: debug 2026-03-09T17:26:56.127+0000 7fc6535e9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:26:56.228 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:55 vm02 bash[24073]: debug 2026-03-09T17:26:55.964+0000 7f5ac358a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:26:56.228 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: debug 2026-03-09T17:26:56.144+0000 7f5ac358a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:26:56.228 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: debug 2026-03-09T17:26:56.180+0000 7f5ac358a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:26:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: debug 2026-03-09T17:26:56.171+0000 7fc6535e9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:26:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: debug 2026-03-09T17:26:56.331+0000 7fc6535e9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:26:56.600 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:26:56.630 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: debug 2026-03-09T17:26:56.224+0000 7f5ac358a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:26:56.630 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: debug 2026-03-09T17:26:56.388+0000 7f5ac358a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 
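
Note on the repeated "Module <name> has missing NOTIFY_TYPES member" messages above: they are emitted by ceph-mgr on both nodes while it loads its bundled Python modules, for any module that does not declare which cluster notifications it consumes. The warnings are non-fatal here (the mgr daemons go on to become active later in this log). A minimal, hypothetical sketch of such a declaration follows, assuming the squid-era mgr_module interface; the module name and the chosen notification types are illustrative only, not taken from this run.

    from mgr_module import MgrModule, NotifyType  # assumed squid-era interface


    class Example(MgrModule):
        # Declaring NOTIFY_TYPES tells the mgr which notifications to deliver to
        # notify(); omitting the attribute produces the "missing NOTIFY_TYPES
        # member" warning seen in the journal lines above.
        NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            # Invoked by the mgr for each subscribed notification.
            if notify_type == NotifyType.osd_map:
                self.log.debug("osdmap changed, id=%s", notify_id)
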
2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- 192.168.123.102:0/4258887099 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 msgr2=0x7f6e1810a7d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- 192.168.123.102:0/4258887099 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 0x7f6e1810a7d0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f6e0c009a30 tx=0x7f6e0c02f260 comp rx=0 tx=0).stop 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- 192.168.123.102:0/4258887099 shutdown_connections 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- 192.168.123.102:0/4258887099 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e181115c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- 192.168.123.102:0/4258887099 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 0x7f6e1810a7d0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- 192.168.123.102:0/4258887099 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e181095b0 0x7f6e181099b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- 192.168.123.102:0/4258887099 >> 192.168.123.102:0/4258887099 conn(0x7f6e180782c0 msgr2=0x7f6e1807a700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- 192.168.123.102:0/4258887099 shutdown_connections 2026-03-09T17:26:56.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- 192.168.123.102:0/4258887099 wait complete. 
2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 Processor -- start 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- start start 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e181095b0 0x7f6e18074840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 0x7f6e180712a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e180717e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6e181142d0 con 0x7f6e1810ad10 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f6e18114150 con 0x7f6e1810a370 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e17577640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f6e18114450 con 0x7f6e181095b0 2026-03-09T17:26:56.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e180717e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e180717e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.102:51678/0 (socket says 192.168.123.102:51678) 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 -- 192.168.123.102:0/2835766824 learned_addr learned my addr 192.168.123.102:0/2835766824 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16575640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e181095b0 0x7f6e18074840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 -- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e181095b0 msgr2=0x7f6e18074840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:56.820 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e181095b0 0x7f6e18074840 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 -- 192.168.123.102:0/2835766824 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 msgr2=0x7f6e180712a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 0x7f6e180712a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.816+0000 7f6e16d76640 1 -- 192.168.123.102:0/2835766824 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6e18071d50 con 0x7f6e1810ad10 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6e16d76640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e180717e0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f6e0800ef30 tx=0x7f6e0800c560 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6e08019070 con 0x7f6e1810ad10 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f6e080092d0 con 0x7f6e1810ad10 2026-03-09T17:26:56.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6e08004880 con 0x7f6e1810ad10 2026-03-09T17:26:56.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6e1807ba40 con 0x7f6e1810ad10 2026-03-09T17:26:56.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6e1807bf50 con 0x7f6e1810ad10 2026-03-09T17:26:56.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6ddc005180 con 0x7f6e1810ad10 2026-03-09T17:26:56.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 18) ==== 99714+0+0 (secure 0 0 0) 0x7f6e08004b40 con 0x7f6e1810ad10 2026-03-09T17:26:56.822 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.820+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f6e08099300 con 0x7f6e1810ad10 2026-03-09T17:26:56.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:56.824+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6e08010040 con 0x7f6e1810ad10 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:56 vm00 bash[28333]: cluster 2026-03-09T17:26:56.573587+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:56 vm00 bash[28333]: cluster 2026-03-09T17:26:56.573587+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:56 vm00 bash[28333]: cluster 2026-03-09T17:26:56.573893+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:56 vm00 bash[28333]: cluster 2026-03-09T17:26:56.573893+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:56 vm00 bash[20770]: cluster 2026-03-09T17:26:56.573587+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:56 vm00 bash[20770]: cluster 2026-03-09T17:26:56.573587+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:56 vm00 bash[20770]: cluster 2026-03-09T17:26:56.573893+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:56 vm00 bash[20770]: cluster 2026-03-09T17:26:56.573893+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: debug 2026-03-09T17:26:56.571+0000 7fc6535e9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: [09/Mar/2026:17:26:56] ENGINE Bus STARTING 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: CherryPy Checker: 2026-03-09T17:26:56.847 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: The Application mounted at '' has an empty config. 
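
Note on the messenger traffic above (mon_getmap, mon_subscribe, get_command_descriptions, mgrmap/osdmap deliveries): this is the usual bootstrap a ceph CLI invocation performs before dispatching its command, here driven by cephadm shell on vm02. A rough, hypothetical equivalent using the librados Python binding is sketched below; the conffile/keyring paths and the example command are assumptions, and binding signatures may differ slightly between releases.

    import json

    import rados  # librados Python binding; assumed available alongside the CLI

    # Connect the way the `ceph` CLI does: read ceph.conf, authenticate as
    # client.admin, and let the messenger do the mon handshake logged above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          conf=dict(keyring='/etc/ceph/ceph.client.admin.keyring'))
    cluster.connect()

    # Dispatch a monitor command; the CLI's mon_command messages above carry
    # JSON payloads of the same shape.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'')
    print(ret, outbuf.decode() if outbuf else outs)

    cluster.shutdown()

Orchestrator calls such as the `ceph orch apply` requests later in this log are routed to the active mgr rather than a mon; in the same binding that would presumably go through Rados.mgr_command() instead of mon_command().
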
2026-03-09T17:26:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:56 vm02 bash[23351]: cluster 2026-03-09T17:26:56.573587+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:56 vm02 bash[23351]: cluster 2026-03-09T17:26:56.573587+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:56 vm02 bash[23351]: cluster 2026-03-09T17:26:56.573893+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:56 vm02 bash[23351]: cluster 2026-03-09T17:26:56.573893+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: debug 2026-03-09T17:26:56.656+0000 7f5ac358a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: [09/Mar/2026:17:26:56] ENGINE Bus STARTING 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: CherryPy Checker: 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: The Application mounted at '' has an empty config. 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: [09/Mar/2026:17:26:56] ENGINE Serving on http://:::9283 2026-03-09T17:26:56.885 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:26:56 vm02 bash[24073]: [09/Mar/2026:17:26:56] ENGINE Bus STARTED 2026-03-09T17:26:57.287 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: [09/Mar/2026:17:26:56] ENGINE Serving on http://:::9283 2026-03-09T17:26:57.288 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:26:56 vm00 bash[21037]: [09/Mar/2026:17:26:56] ENGINE Bus STARTED 2026-03-09T17:26:57.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.672+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 19) ==== 99806+0+0 (secure 0 0 0) 0x7f6e0805d8f0 con 0x7f6e1810ad10 2026-03-09T17:26:57.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.672+0000 7f6dfb7fe640 1 --2- 192.168.123.102:0/2835766824 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6df007ddf0 0x7f6df00801e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:26:57.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.672+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 --> v2:192.168.123.100:6800/2673235927 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a", "target": ["mon-mgr", ""]}) -- 0x7f6df0080720 con 0x7f6df007ddf0 2026-03-09T17:26:57.679 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.676+0000 7f6e16575640 1 --2- 192.168.123.102:0/2835766824 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6df007ddf0 0x7f6df00801e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:26:57.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.696+0000 7f6e16575640 1 --2- 192.168.123.102:0/2835766824 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6df007ddf0 0x7f6df00801e0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f6e04004640 
tx=0x7f6e04009210 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:26:57.711 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled alertmanager update... 2026-03-09T17:26:57.711 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.708+0000 7f6dfb7fe640 1 -- 192.168.123.102:0/2835766824 <== mgr.14505 v2:192.168.123.100:6800/2673235927 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+33 (secure 0 0 0) 0x7f6df0080720 con 0x7f6df007ddf0 2026-03-09T17:26:57.713 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6df007ddf0 msgr2=0x7f6df00801e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:57.713 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 --2- 192.168.123.102:0/2835766824 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6df007ddf0 0x7f6df00801e0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f6e04004640 tx=0x7f6e04009210 comp rx=0 tx=0).stop 2026-03-09T17:26:57.713 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 msgr2=0x7f6e180717e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:26:57.713 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e180717e0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f6e0800ef30 tx=0x7f6e0800c560 comp rx=0 tx=0).stop 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 shutdown_connections 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 --2- 192.168.123.102:0/2835766824 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6df007ddf0 0x7f6df00801e0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e1810ad10 0x7f6e180717e0 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6e1810a370 0x7f6e180712a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 --2- 192.168.123.102:0/2835766824 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e181095b0 0x7f6e18074840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 >> 192.168.123.102:0/2835766824 conn(0x7f6e180782c0 msgr2=0x7f6e18079d50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 
shutdown_connections 2026-03-09T17:26:57.714 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:26:57.712+0000 7f6e17577640 1 -- 192.168.123.102:0/2835766824 wait complete. 2026-03-09T17:26:57.774 DEBUG:teuthology.orchestra.run.vm00:alertmanager.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@alertmanager.a.service 2026-03-09T17:26:57.775 INFO:tasks.cephadm:Adding grafana.a on vm02 2026-03-09T17:26:57.775 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph orch apply grafana '1;vm02=a' 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.637409+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.637409+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.639401+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0656276s), standbys: x 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.639401+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0656276s), standbys: x 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.656730+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.656730+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.657103+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.657103+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.657474+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.657474+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.657867+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", 
"who": "y", "id": "y"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.657867+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.665697+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.665697+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.666267+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.666267+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.666913+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.666913+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.667611+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.667611+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.668299+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.668299+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.668984+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 
2026-03-09T17:26:56.668984+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.669655+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.669655+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.669863+0000 mon.a (mon.0) 739 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.669863+0000 mon.a (mon.0) 739 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.670034+0000 mon.a (mon.0) 740 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.670034+0000 mon.a (mon.0) 740 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.670120+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.670120+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.670964+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.670964+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.671191+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.671191+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 
192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.671710+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.671710+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.671925+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.671925+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.672226+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.672226+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:26:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.673025+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.673025+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.678120+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.678120+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.679202+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.679202+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.680692+0000 mon.a (mon.0) 741 : cluster [INF] Manager daemon y is now available 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: cluster 2026-03-09T17:26:56.680692+0000 mon.a (mon.0) 741 : cluster [INF] Manager daemon y is now available 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.706146+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.706146+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.711399+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.711399+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.733406+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.733406+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.740145+0000 mon.c (mon.2) 42 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.740145+0000 mon.c (mon.2) 42 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.740504+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.740504+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.764801+0000 mon.c (mon.2) 43 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.764801+0000 mon.c (mon.2) 43 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.765146+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:57 vm00 bash[28333]: audit 2026-03-09T17:26:56.765146+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.637409+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.637409+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.639401+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0656276s), standbys: x 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.639401+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0656276s), standbys: x 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.656730+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.656730+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.657103+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.657103+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.657474+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.657474+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.657867+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.657867+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.665697+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.665697+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.666267+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.666267+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.666913+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.666913+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.667611+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.667611+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.668299+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.668299+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:26:58.039 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.668984+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.668984+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.669655+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.669655+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.669863+0000 mon.a (mon.0) 739 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.669863+0000 mon.a (mon.0) 739 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.670034+0000 mon.a (mon.0) 740 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.670034+0000 mon.a (mon.0) 740 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:26:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.670120+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.670120+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.670964+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.670964+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.671191+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.671191+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 
192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.671710+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.671710+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.671925+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.671925+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.672226+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.672226+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.673025+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.673025+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.678120+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.678120+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.679202+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.679202+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.680692+0000 mon.a (mon.0) 741 : cluster [INF] Manager daemon y is now available 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: cluster 2026-03-09T17:26:56.680692+0000 mon.a (mon.0) 741 : cluster [INF] Manager daemon y is now available 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.706146+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.706146+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.711399+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.711399+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.733406+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.733406+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.740145+0000 mon.c (mon.2) 42 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.740145+0000 mon.c (mon.2) 42 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.740504+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.740504+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.764801+0000 mon.c (mon.2) 43 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.764801+0000 mon.c (mon.2) 43 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.765146+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:57 vm00 bash[20770]: audit 2026-03-09T17:26:56.765146+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.637409+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T17:26:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.637409+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T17:26:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.639401+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0656276s), standbys: x 2026-03-09T17:26:58.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.639401+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0656276s), standbys: x 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.656730+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.656730+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.657103+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.657103+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.657474+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.657474+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.657867+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.657867+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.665697+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.665697+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.666267+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.666267+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.666913+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.666913+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.667611+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.667611+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.668299+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.668299+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T17:26:58.136 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.668984+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.668984+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.669655+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.669655+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.669863+0000 mon.a (mon.0) 739 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.669863+0000 mon.a (mon.0) 739 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.670034+0000 mon.a (mon.0) 740 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.670034+0000 mon.a (mon.0) 740 : cluster [DBG] Standby manager daemon x started 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.670120+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.670120+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.670964+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.670964+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.671191+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.671191+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 
192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.671710+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.671710+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.671925+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.671925+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.672226+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.672226+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.673025+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.673025+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.678120+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.678120+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.679202+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.679202+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.102:0/16908758' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.680692+0000 mon.a (mon.0) 741 : cluster [INF] Manager daemon y is now available 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: cluster 2026-03-09T17:26:56.680692+0000 mon.a (mon.0) 741 : cluster [INF] Manager daemon y is now available 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.706146+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.706146+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.711399+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.711399+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.733406+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.733406+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.740145+0000 mon.c (mon.2) 42 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.740145+0000 mon.c (mon.2) 42 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.740504+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.740504+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.764801+0000 mon.c (mon.2) 43 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.764801+0000 mon.c (mon.2) 43 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.765146+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:58.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:57 vm02 bash[23351]: audit 2026-03-09T17:26:56.765146+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T17:26:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cluster 2026-03-09T17:26:57.670007+0000 mon.a (mon.0) 745 : cluster [DBG] mgrmap e19: y(active, since 1.09625s), standbys: x 2026-03-09T17:26:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cluster 2026-03-09T17:26:57.670007+0000 mon.a (mon.0) 745 : cluster [DBG] mgrmap e19: y(active, since 1.09625s), standbys: x 2026-03-09T17:26:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:57.704970+0000 mgr.y (mgr.14505) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T17:26:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:57.704970+0000 mgr.y (mgr.14505) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T17:26:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: audit 2026-03-09T17:26:57.709400+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: audit 2026-03-09T17:26:57.709400+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:57.824694+0000 mgr.y (mgr.14505) 4 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Bus STARTING 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:57.824694+0000 mgr.y (mgr.14505) 4 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Bus STARTING 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:57.926061+0000 mgr.y (mgr.14505) 5 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:57.926061+0000 mgr.y (mgr.14505) 5 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:58.036412+0000 mgr.y (mgr.14505) 6 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Serving on https://192.168.123.100:7150 
2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:58.036412+0000 mgr.y (mgr.14505) 6 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:58.036459+0000 mgr.y (mgr.14505) 7 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Bus STARTED 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:58.036459+0000 mgr.y (mgr.14505) 7 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Bus STARTED 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:58.036852+0000 mgr.y (mgr.14505) 8 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Client ('192.168.123.100', 38252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:58 vm00 bash[28333]: cephadm 2026-03-09T17:26:58.036852+0000 mgr.y (mgr.14505) 8 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Client ('192.168.123.100', 38252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cluster 2026-03-09T17:26:57.670007+0000 mon.a (mon.0) 745 : cluster [DBG] mgrmap e19: y(active, since 1.09625s), standbys: x 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cluster 2026-03-09T17:26:57.670007+0000 mon.a (mon.0) 745 : cluster [DBG] mgrmap e19: y(active, since 1.09625s), standbys: x 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:57.704970+0000 mgr.y (mgr.14505) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:57.704970+0000 mgr.y (mgr.14505) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: audit 2026-03-09T17:26:57.709400+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: audit 2026-03-09T17:26:57.709400+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:57.824694+0000 mgr.y (mgr.14505) 4 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Bus STARTING 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:57.824694+0000 mgr.y (mgr.14505) 4 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Bus STARTING 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:57.926061+0000 mgr.y (mgr.14505) 5 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:57.926061+0000 mgr.y (mgr.14505) 5 : 
cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:58.036412+0000 mgr.y (mgr.14505) 6 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:58.036412+0000 mgr.y (mgr.14505) 6 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:58.036459+0000 mgr.y (mgr.14505) 7 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Bus STARTED 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:58.036459+0000 mgr.y (mgr.14505) 7 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Bus STARTED 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:58.036852+0000 mgr.y (mgr.14505) 8 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Client ('192.168.123.100', 38252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:26:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:58 vm00 bash[20770]: cephadm 2026-03-09T17:26:58.036852+0000 mgr.y (mgr.14505) 8 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Client ('192.168.123.100', 38252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cluster 2026-03-09T17:26:57.670007+0000 mon.a (mon.0) 745 : cluster [DBG] mgrmap e19: y(active, since 1.09625s), standbys: x 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cluster 2026-03-09T17:26:57.670007+0000 mon.a (mon.0) 745 : cluster [DBG] mgrmap e19: y(active, since 1.09625s), standbys: x 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:57.704970+0000 mgr.y (mgr.14505) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:57.704970+0000 mgr.y (mgr.14505) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: audit 2026-03-09T17:26:57.709400+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: audit 2026-03-09T17:26:57.709400+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:57.824694+0000 mgr.y (mgr.14505) 4 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Bus STARTING 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:57.824694+0000 mgr.y (mgr.14505) 4 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Bus STARTING 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:57.926061+0000 mgr.y (mgr.14505) 5 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:57.926061+0000 mgr.y (mgr.14505) 5 : cephadm [INF] [09/Mar/2026:17:26:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:58.036412+0000 mgr.y (mgr.14505) 6 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:58.036412+0000 mgr.y (mgr.14505) 6 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:58.036459+0000 mgr.y (mgr.14505) 7 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Bus STARTED 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:58.036459+0000 mgr.y (mgr.14505) 7 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Bus STARTED 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:58.036852+0000 mgr.y (mgr.14505) 8 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Client ('192.168.123.100', 38252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:26:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:58 vm02 bash[23351]: cephadm 2026-03-09T17:26:58.036852+0000 mgr.y (mgr.14505) 8 : cephadm [INF] [09/Mar/2026:17:26:58] ENGINE Client ('192.168.123.100', 38252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:27:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:59 vm00 bash[28333]: cluster 2026-03-09T17:26:58.646857+0000 mgr.y (mgr.14505) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:59 vm00 bash[28333]: cluster 2026-03-09T17:26:58.646857+0000 mgr.y (mgr.14505) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:59 vm00 bash[28333]: cluster 2026-03-09T17:26:58.727679+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-09T17:27:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:26:59 vm00 bash[28333]: cluster 2026-03-09T17:26:58.727679+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-09T17:27:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:59 vm00 bash[20770]: cluster 2026-03-09T17:26:58.646857+0000 mgr.y (mgr.14505) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:59 vm00 bash[20770]: cluster 2026-03-09T17:26:58.646857+0000 mgr.y (mgr.14505) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 
2026-03-09T17:27:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:59 vm00 bash[20770]: cluster 2026-03-09T17:26:58.727679+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-09T17:27:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:26:59 vm00 bash[20770]: cluster 2026-03-09T17:26:58.727679+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-09T17:27:00.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:59 vm02 bash[23351]: cluster 2026-03-09T17:26:58.646857+0000 mgr.y (mgr.14505) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:00.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:59 vm02 bash[23351]: cluster 2026-03-09T17:26:58.646857+0000 mgr.y (mgr.14505) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:00.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:59 vm02 bash[23351]: cluster 2026-03-09T17:26:58.727679+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-09T17:27:00.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:26:59 vm02 bash[23351]: cluster 2026-03-09T17:26:58.727679+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-09T17:27:01.731 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:27:01.865 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:01 vm00 bash[20770]: cluster 2026-03-09T17:27:00.647200+0000 mgr.y (mgr.14505) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:01 vm00 bash[20770]: cluster 2026-03-09T17:27:00.647200+0000 mgr.y (mgr.14505) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:01 vm00 bash[20770]: cluster 2026-03-09T17:27:00.734492+0000 mon.a (mon.0) 748 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:01 vm00 bash[20770]: cluster 2026-03-09T17:27:00.734492+0000 mon.a (mon.0) 748 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:01 vm00 bash[28333]: cluster 2026-03-09T17:27:00.647200+0000 mgr.y (mgr.14505) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:01 vm00 bash[28333]: cluster 2026-03-09T17:27:00.647200+0000 mgr.y (mgr.14505) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:01 vm00 bash[28333]: cluster 2026-03-09T17:27:00.734492+0000 mon.a (mon.0) 748 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T17:27:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:01 vm00 bash[28333]: cluster 2026-03-09T17:27:00.734492+0000 mon.a (mon.0) 748 : cluster 
[DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.040+0000 7fa49214c640 1 -- 192.168.123.102:0/1815240500 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a4b70 msgr2=0x7fa4840a4f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.040+0000 7fa49214c640 1 --2- 192.168.123.102:0/1815240500 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a4b70 0x7fa4840a4f70 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fa488009960 tx=0x7fa48802f180 comp rx=0 tx=0).stop 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- 192.168.123.102:0/1815240500 shutdown_connections 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 --2- 192.168.123.102:0/1815240500 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa4840a6600 0x7fa4840aaec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 --2- 192.168.123.102:0/1815240500 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa4840a5c60 0x7fa4840a60c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 --2- 192.168.123.102:0/1815240500 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a4b70 0x7fa4840a4f70 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- 192.168.123.102:0/1815240500 >> 192.168.123.102:0/1815240500 conn(0x7fa4840a01c0 msgr2=0x7fa4840a2620 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- 192.168.123.102:0/1815240500 shutdown_connections 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- 192.168.123.102:0/1815240500 wait complete. 
2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 Processor -- start 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- start start 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa4840a4b70 0x7fa484144d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 0x7fa484145260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa4840a6600 0x7fa48414c2a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa4840b85d0 con 0x7fa4840a6600 2026-03-09T17:27:02.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fa4840b8450 con 0x7fa4840a4b70 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa4840b8750 con 0x7fa4840a5c60 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 0x7fa484145260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 0x7fa484145260 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.102:45076/0 (socket says 192.168.123.102:45076) 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 -- 192.168.123.102:0/2905035509 learned_addr learned my addr 192.168.123.102:0/2905035509 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 -- 192.168.123.102:0/2905035509 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa4840a4b70 msgr2=0x7fa484144d20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa4840a4b70 0x7fa484144d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 -- 
192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa4840a6600 msgr2=0x7fa48414c2a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa4840a6600 0x7fa48414c2a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 -- 192.168.123.102:0/2905035509 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa48414c9d0 con 0x7fa4840a5c60 2026-03-09T17:27:02.046 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa490949640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 0x7fa484145260 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fa48c02e5b0 tx=0x7fa48c0747f0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:02.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa48c076840 con 0x7fa4840a5c60 2026-03-09T17:27:02.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- 192.168.123.102:0/2905035509 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa48414ccc0 con 0x7fa4840a5c60 2026-03-09T17:27:02.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa49214c640 1 -- 192.168.123.102:0/2905035509 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fa48414d240 con 0x7fa4840a5c60 2026-03-09T17:27:02.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa48c076e50 con 0x7fa4840a5c60 2026-03-09T17:27:02.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.044+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa48c07eda0 con 0x7fa4840a5c60 2026-03-09T17:27:02.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.048+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fa48c088430 con 0x7fa4840a5c60 2026-03-09T17:27:02.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.048+0000 7fa47a7fc640 1 --2- 192.168.123.102:0/2905035509 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa4740776f0 0x7fa474079bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:02.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.048+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fa48c0ffe20 con 0x7fa4840a5c60 2026-03-09T17:27:02.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.048+0000 7fa49214c640 1 -- 192.168.123.102:0/2905035509 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa454005180 con 0x7fa4840a5c60 2026-03-09T17:27:02.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.048+0000 7fa49114a640 1 --2- 192.168.123.102:0/2905035509 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa4740776f0 0x7fa474079bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:02.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.048+0000 7fa49114a640 1 --2- 192.168.123.102:0/2905035509 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa4740776f0 0x7fa474079bb0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa488009880 tx=0x7fa488009510 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:02.053 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.052+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa48c07a030 con 0x7fa4840a5c60 2026-03-09T17:27:02.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:01 vm02 bash[23351]: cluster 2026-03-09T17:27:00.647200+0000 mgr.y (mgr.14505) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:02.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:01 vm02 bash[23351]: cluster 2026-03-09T17:27:00.647200+0000 mgr.y (mgr.14505) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:27:02.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:01 vm02 bash[23351]: cluster 2026-03-09T17:27:00.734492+0000 mon.a (mon.0) 748 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T17:27:02.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:01 vm02 bash[23351]: cluster 2026-03-09T17:27:00.734492+0000 mon.a (mon.0) 748 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T17:27:02.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.176+0000 7fa49214c640 1 -- 192.168.123.102:0/2905035509 --> v2:192.168.123.100:6800/2673235927 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}) -- 0x7fa454002bf0 con 0x7fa4740776f0 2026-03-09T17:27:02.204 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled grafana update... 
2026-03-09T17:27:02.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.200+0000 7fa47a7fc640 1 -- 192.168.123.102:0/2905035509 <== mgr.14505 v2:192.168.123.100:6800/2673235927 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+28 (secure 0 0 0) 0x7fa454002bf0 con 0x7fa4740776f0 2026-03-09T17:27:02.206 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 -- 192.168.123.102:0/2905035509 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa4740776f0 msgr2=0x7fa474079bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:02.206 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 --2- 192.168.123.102:0/2905035509 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa4740776f0 0x7fa474079bb0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa488009880 tx=0x7fa488009510 comp rx=0 tx=0).stop 2026-03-09T17:27:02.206 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 -- 192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 msgr2=0x7fa484145260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:02.206 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 0x7fa484145260 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fa48c02e5b0 tx=0x7fa48c0747f0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 -- 192.168.123.102:0/2905035509 shutdown_connections 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 --2- 192.168.123.102:0/2905035509 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa4740776f0 0x7fa474079bb0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa4840a6600 0x7fa48414c2a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa4840a5c60 0x7fa484145260 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 --2- 192.168.123.102:0/2905035509 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa4840a4b70 0x7fa484144d20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 -- 192.168.123.102:0/2905035509 >> 192.168.123.102:0/2905035509 conn(0x7fa4840a01c0 msgr2=0x7fa4840a6ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 -- 192.168.123.102:0/2905035509 shutdown_connections 2026-03-09T17:27:02.207 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:02.204+0000 7fa45bfff640 1 -- 192.168.123.102:0/2905035509 wait complete. 
2026-03-09T17:27:02.303 DEBUG:teuthology.orchestra.run.vm02:grafana.a> sudo journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@grafana.a.service 2026-03-09T17:27:02.304 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T17:27:02.304 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T17:27:03.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:01.449527+0000 mgr.y (mgr.14505) 11 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:01.449527+0000 mgr.y (mgr.14505) 11 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.165389+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.165389+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.174587+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.174587+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.180886+0000 mgr.y (mgr.14505) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.180886+0000 mgr.y (mgr.14505) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: cephadm 2026-03-09T17:27:02.181903+0000 mgr.y (mgr.14505) 13 : cephadm [INF] Saving service grafana spec with placement vm02=a;count:1 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: cephadm 2026-03-09T17:27:02.181903+0000 mgr.y (mgr.14505) 13 : cephadm [INF] Saving service grafana spec with placement vm02=a;count:1 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.192640+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.192640+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:03.459 
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.352841+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.360551+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.815208+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.821401+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.823325+0000 mon.c (mon.2) 44 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.823567+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.990602+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.995408+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.998818+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:02.999008+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:03.000019+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:03.000832+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:03.150735+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[20770]: audit 2026-03-09T17:27:03.158465+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:01.449527+0000 mgr.y (mgr.14505) 11 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.165389+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.174587+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.459 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.180886+0000 mgr.y (mgr.14505) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: cephadm 2026-03-09T17:27:02.181903+0000 mgr.y (mgr.14505) 13 : cephadm [INF] Saving service grafana spec with placement vm02=a;count:1
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.192640+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.352841+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.360551+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.815208+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.821401+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.823325+0000 mon.c (mon.2) 44 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.823567+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.990602+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.995408+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.998818+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:02.999008+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:03.000019+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:03.000832+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:03.150735+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 bash[28333]: audit 2026-03-09T17:27:03.158465+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.460 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:01.449527+0000 mgr.y (mgr.14505) 11 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T17:27:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.165389+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.174587+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.180886+0000 mgr.y (mgr.14505) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:27:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: cephadm 2026-03-09T17:27:02.181903+0000 mgr.y (mgr.14505) 13 : cephadm [INF] Saving service grafana spec with placement vm02=a;count:1
2026-03-09T17:27:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.192640+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.352841+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.360551+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.815208+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.821401+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.823325+0000 mon.c (mon.2) 44 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.823567+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.990602+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.995408+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.998818+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:02.999008+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:03.000019+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:03.000832+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:03.150735+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:03 vm02 bash[23351]: audit 2026-03-09T17:27:03.158465+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:03.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.710 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.988 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.988 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.988 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.989 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.989 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.989 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
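The repeated warning above comes from line 23 of the cephadm-generated unit template ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service, which sets KillMode=none; systemd logs it each time one of the containerized daemons on vm00/vm02 starts. It is informational for this run, but systemd's suggested remedy would look roughly like the following drop-in override (a sketch only, not something this job performs, and whether overriding the cephadm default is appropriate is a separate question):

    sudo mkdir -p /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service.d/override.conf
    sudo systemctl daemon-reload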
2026-03-09T17:27:03.989 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.989 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:03.989 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: Started Ceph node-exporter.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e.
2026-03-09T17:27:03.989 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 17:27:03 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cluster 2026-03-09T17:27:02.647523+0000 mgr.y (mgr.14505) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.001685+0000 mgr.y (mgr.14505) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.001757+0000 mgr.y (mgr.14505) 16 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.034819+0000 mgr.y (mgr.14505) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.037292+0000 mgr.y (mgr.14505) 18 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.070581+0000 mgr.y (mgr.14505) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.075354+0000 mgr.y (mgr.14505) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.106780+0000 mgr.y (mgr.14505) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.112946+0000 mgr.y (mgr.14505) 22 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: audit 2026-03-09T17:27:03.165814+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: audit 2026-03-09T17:27:03.192108+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: audit 2026-03-09T17:27:03.198889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.201250+0000 mgr.y (mgr.14505) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: audit 2026-03-09T17:27:03.961067+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: audit 2026-03-09T17:27:03.972150+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:04 vm00 bash[20770]: audit 2026-03-09T17:27:03.979795+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cluster 2026-03-09T17:27:02.647523+0000 mgr.y (mgr.14505) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.001685+0000 mgr.y (mgr.14505) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.001757+0000 mgr.y (mgr.14505) 16 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-09T17:27:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.034819+0000 mgr.y (mgr.14505) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf
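The ceph.conf files cephadm pushes to vm00 and vm02 above are the output of the "config generate-minimal-conf" call recorded in the audit log a moment earlier; it returns a minimal [global] section (essentially the fsid and mon_host) suitable for clients. For reference only, and not a step this job runs by hand, the same output can be inspected with:

    ceph config generate-minimal-conf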
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.037292+0000 mgr.y (mgr.14505) 18 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.070581+0000 mgr.y (mgr.14505) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.075354+0000 mgr.y (mgr.14505) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.106780+0000 mgr.y (mgr.14505) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.112946+0000 mgr.y (mgr.14505) 22 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: audit 2026-03-09T17:27:03.165814+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: audit 2026-03-09T17:27:03.192108+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: audit 2026-03-09T17:27:03.198889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.201250+0000 mgr.y (mgr.14505) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: audit 2026-03-09T17:27:03.961067+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: audit 2026-03-09T17:27:03.972150+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:04 vm00 bash[28333]: audit 2026-03-09T17:27:03.979795+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.289 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:03 vm00 bash[55703]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-09T17:27:04.426 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.427 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cluster 2026-03-09T17:27:02.647523+0000 mgr.y (mgr.14505) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.001685+0000 mgr.y (mgr.14505) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.001757+0000 mgr.y (mgr.14505) 16 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.034819+0000 mgr.y (mgr.14505) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.037292+0000 mgr.y (mgr.14505) 18 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.conf
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.070581+0000 mgr.y (mgr.14505) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.075354+0000 mgr.y (mgr.14505) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.106780+0000 mgr.y (mgr.14505) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.112946+0000 mgr.y (mgr.14505) 22 : cephadm [INF] Updating vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/config/ceph.client.admin.keyring
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: audit 2026-03-09T17:27:03.165814+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: audit 2026-03-09T17:27:03.192108+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: audit 2026-03-09T17:27:03.198889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.201250+0000 mgr.y (mgr.14505) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: audit 2026-03-09T17:27:03.961067+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: audit 2026-03-09T17:27:03.972150+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[23351]: audit 2026-03-09T17:27:03.979795+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:27:04.427 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.427 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.428 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.428 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.735 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.735 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.736 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.736 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.736 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.736 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.737 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:04.737 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:05.135 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:04 vm02 systemd[1]: Started Ceph node-exporter.b for 16190428-1bdc-11f1-aea4-d920f1c7e51e.
2026-03-09T17:27:05.135 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:04 vm02 bash[50727]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.980706+0000 mgr.y (mgr.14505) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm02 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: cephadm 2026-03-09T17:27:03.980706+0000 mgr.y (mgr.14505) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm02 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.768081+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.768081+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.772699+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.772699+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.777262+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.777262+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.780925+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.780925+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.786383+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[20770]: audit 2026-03-09T17:27:04.786383+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.980706+0000 mgr.y (mgr.14505) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm02 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: cephadm 2026-03-09T17:27:03.980706+0000 mgr.y (mgr.14505) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm02 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.768081+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.768081+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 
2026-03-09T17:27:04.772699+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.772699+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.777262+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.777262+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.780925+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.780925+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.786383+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:05 vm00 bash[28333]: audit 2026-03-09T17:27:04.786383+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.417 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:05 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.980706+0000 mgr.y (mgr.14505) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm02 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: cephadm 2026-03-09T17:27:03.980706+0000 mgr.y (mgr.14505) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm02 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.768081+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.768081+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.772699+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.772699+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.777262+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.777262+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.780925+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.780925+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.786383+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:05 vm02 bash[23351]: audit 2026-03-09T17:27:04.786383+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:05.787 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[55703]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T17:27:06.185 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[55703]: 2abcce694348: Pulling fs layer 2026-03-09T17:27:06.185 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[55703]: 455fd88e5221: Pulling fs layer 2026-03-09T17:27:06.185 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:05 vm00 bash[55703]: 324153f2810a: Pulling fs layer 2026-03-09T17:27:06.506 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[23351]: cluster 2026-03-09T17:27:04.647984+0000 mgr.y (mgr.14505) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T17:27:06.507 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[23351]: cluster 2026-03-09T17:27:04.647984+0000 mgr.y (mgr.14505) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T17:27:06.507 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[23351]: cephadm 2026-03-09T17:27:04.791244+0000 mgr.y (mgr.14505) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T17:27:06.507 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[23351]: cephadm 2026-03-09T17:27:04.791244+0000 mgr.y (mgr.14505) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T17:27:06.507 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[50727]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:06 vm00 bash[28333]: cluster 2026-03-09T17:27:04.647984+0000 mgr.y (mgr.14505) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:06 vm00 bash[28333]: cluster 2026-03-09T17:27:04.647984+0000 mgr.y (mgr.14505) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:06 vm00 bash[28333]: cephadm 2026-03-09T17:27:04.791244+0000 mgr.y (mgr.14505) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:06 vm00 bash[28333]: cephadm 2026-03-09T17:27:04.791244+0000 mgr.y (mgr.14505) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[20770]: cluster 2026-03-09T17:27:04.647984+0000 mgr.y (mgr.14505) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[20770]: cluster 2026-03-09T17:27:04.647984+0000 mgr.y (mgr.14505) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[20770]: cephadm 2026-03-09T17:27:04.791244+0000 mgr.y (mgr.14505) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[20770]: cephadm 2026-03-09T17:27:04.791244+0000 mgr.y (mgr.14505) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T17:27:06.520 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:27:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:27:06.520 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 455fd88e5221: Verifying Checksum 2026-03-09T17:27:06.520 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 455fd88e5221: Download complete 2026-03-09T17:27:06.520 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 2abcce694348: Verifying Checksum 2026-03-09T17:27:06.520 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 2abcce694348: Download complete 2026-03-09T17:27:06.520 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 2abcce694348: Pull complete 
2026-03-09T17:27:06.776 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 324153f2810a: Verifying Checksum 2026-03-09T17:27:06.776 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 324153f2810a: Download complete 2026-03-09T17:27:06.776 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 455fd88e5221: Pull complete 2026-03-09T17:27:06.776 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 324153f2810a: Pull complete 2026-03-09T17:27:06.776 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T17:27:06.776 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T17:27:06.885 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[50727]: 2abcce694348: Pulling fs layer 2026-03-09T17:27:06.885 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[50727]: 455fd88e5221: Pulling fs layer 2026-03-09T17:27:06.885 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[50727]: 324153f2810a: Pulling fs layer 2026-03-09T17:27:06.984 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.774Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.774Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T17:27:07.034 
INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T17:27:07.034 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: 
ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=tapestats 
2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T17:27:07.035 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:06 vm00 bash[55703]: ts=2026-03-09T17:27:06.775Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.139+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/1257745241 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c941033e0 msgr2=0x7f4c9410f910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.139+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/1257745241 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c941033e0 0x7f4c9410f910 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f4c88009fc0 tx=0x7f4c8802f3b0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c93fff640 1 --2- 192.168.123.100:0/1257745241 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c94108a40 0x7f4c94108e20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/1257745241 shutdown_connections 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/1257745241 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c941033e0 0x7f4c9410f910 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/1257745241 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 0x7f4c94102ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/1257745241 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c94108a40 0x7f4c94108e20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/1257745241 >> 192.168.123.100:0/1257745241 conn(0x7f4c940fe740 msgr2=0x7f4c94100b60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/1257745241 shutdown_connections 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/1257745241 wait complete. 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 Processor -- start 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- start start 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 0x7f4c94116580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 0x7f4c94116ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c94108a40 0x7f4c941116a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4c94076db0 con 0x7f4c94102a40 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f4c94076c30 con 0x7f4c941033e0 2026-03-09T17:27:07.144 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f4c94076f30 con 0x7f4c94108a40 2026-03-09T17:27:07.145 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 0x7f4c94116ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 0x7f4c94116ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:49838/0 (socket says 192.168.123.100:49838) 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 -- 192.168.123.100:0/3813581916 learned_addr learned my addr 192.168.123.100:0/3813581916 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c98c39640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c94108a40 0x7f4c941116a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 -- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c94108a40 msgr2=0x7f4c941116a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c94108a40 0x7f4c941116a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 -- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 msgr2=0x7f4c94116580 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c93fff640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 0x7f4c94116580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 0x7f4c94116580 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 -- 192.168.123.100:0/3813581916 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4c94111f60 con 0x7f4c941033e0 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c98c39640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c94108a40 0x7f4c941116a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_auth_reply_more state changed! 2026-03-09T17:27:07.145 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c937fe640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 0x7f4c94116ac0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f4c800027e0 tx=0x7f4c80002cb0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:07.146 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c93fff640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 0x7f4c94116580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T17:27:07.146 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4c8000ed40 con 0x7f4c941033e0 2026-03-09T17:27:07.146 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4c800108a0 con 0x7f4c941033e0 2026-03-09T17:27:07.146 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4c8000f6a0 con 0x7f4c941033e0 2026-03-09T17:27:07.146 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4c94112250 con 0x7f4c941033e0 2026-03-09T17:27:07.146 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f4c9411c470 con 0x7f4c941033e0 2026-03-09T17:27:07.147 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.143+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4c941124b0 con 0x7f4c941033e0 2026-03-09T17:27:07.147 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.147+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4c80010430 con 0x7f4c941033e0 2026-03-09T17:27:07.148 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.147+0000 7f4c917fa640 1 --2- 192.168.123.100:0/3813581916 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c680776c0 0x7f4c68079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:07.148 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.147+0000 7f4c93fff640 1 --2- 192.168.123.100:0/3813581916 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c680776c0 0x7f4c68079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:07.148 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.147+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 
6181+0+0 (secure 0 0 0) 0x7f4c8009a2e0 con 0x7f4c941033e0 2026-03-09T17:27:07.148 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.147+0000 7f4c93fff640 1 --2- 192.168.123.100:0/3813581916 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c680776c0 0x7f4c68079b80 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f4c84005fd0 tx=0x7f4c84005e20 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:07.150 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.147+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4c80014070 con 0x7f4c941033e0 2026-03-09T17:27:07.279 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.275+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f4c94112bd0 con 0x7f4c941033e0 2026-03-09T17:27:07.284 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c917fa640 1 -- 192.168.123.100:0/3813581916 <== mon.1 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) ==== 170+0+59 (secure 0 0 0) 0x7f4c800670c0 con 0x7f4c941033e0 2026-03-09T17:27:07.284 INFO:teuthology.orchestra.run.vm00.stdout:[client.0] 2026-03-09T17:27:07.284 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDrAq9pwiGyEBAA4AjgzZHQToBqy8+TCjGNbQ== 2026-03-09T17:27:07.286 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c680776c0 msgr2=0x7f4c68079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:07.286 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/3813581916 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c680776c0 0x7f4c68079b80 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f4c84005fd0 tx=0x7f4c84005e20 comp rx=0 tx=0).stop 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 msgr2=0x7f4c94116ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 0x7f4c94116ac0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f4c800027e0 tx=0x7f4c80002cb0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 shutdown_connections 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/3813581916 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c680776c0 0x7f4c68079b80 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.287 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c94108a40 0x7f4c941116a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c941033e0 0x7f4c94116ac0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 --2- 192.168.123.100:0/3813581916 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c94102a40 0x7f4c94116580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:07.287 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.283+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 >> 192.168.123.100:0/3813581916 conn(0x7f4c940fe740 msgr2=0x7f4c940feb20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:07.288 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.287+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 shutdown_connections 2026-03-09T17:27:07.288 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:07.287+0000 7f4c9a6c3640 1 -- 192.168.123.100:0/3813581916 wait complete. 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[50727]: 455fd88e5221: Verifying Checksum 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:06 vm02 bash[50727]: 455fd88e5221: Download complete 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 2abcce694348: Verifying Checksum 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 2abcce694348: Download complete 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 2abcce694348: Pull complete 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 324153f2810a: Verifying Checksum 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 324153f2810a: Download complete 2026-03-09T17:27:07.385 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 455fd88e5221: Pull complete 2026-03-09T17:27:07.430 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:27:07.431 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T17:27:07.431 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T17:27:07.449 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: cluster 2026-03-09T17:27:06.648289+0000 mgr.y (mgr.14505) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s 
wr, 8 op/s 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: cluster 2026-03-09T17:27:06.648289+0000 mgr.y (mgr.14505) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:06.721688+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:06.721688+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:07.279403+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3813581916' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:07.279403+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3813581916' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:07.280018+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:07.280018+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:07.282177+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:07.725 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[23351]: audit 2026-03-09T17:27:07.282177+0000 mon.a (mon.0) 775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: 324153f2810a: Pull complete 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.572Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.572Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 
17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T17:27:07.725 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info 
collector=netdev 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 
17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T17:27:07.726 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:07 vm02 bash[50727]: ts=2026-03-09T17:27:07.573Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: cluster 2026-03-09T17:27:06.648289+0000 mgr.y (mgr.14505) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: cluster 2026-03-09T17:27:06.648289+0000 mgr.y (mgr.14505) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:06.721688+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:06.721688+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:07.279403+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3813581916' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:07.279403+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3813581916' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:07.280018+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:07.280018+0000 mon.a (mon.0) 774 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:07.282177+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:07 vm00 bash[20770]: audit 2026-03-09T17:27:07.282177+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: cluster 2026-03-09T17:27:06.648289+0000 mgr.y (mgr.14505) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: cluster 2026-03-09T17:27:06.648289+0000 mgr.y (mgr.14505) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:06.721688+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:06.721688+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:07.279403+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3813581916' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:07.279403+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3813581916' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:07.280018+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:07.280018+0000 mon.a (mon.0) 774 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:07.282177+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:07 vm00 bash[28333]: audit 2026-03-09T17:27:07.282177+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.038 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.038 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.038 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.038 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.038 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.039 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.039 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:08 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.339 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T17:27:09.340 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.340 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 systemd[1]: Started Ceph alertmanager.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T17:27:09.652 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 bash[21037]: [09/Mar/2026:17:27:09] ENGINE Bus STOPPING 2026-03-09T17:27:09.652 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.390Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.390Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.391Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.392Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.407Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.412Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.416Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T17:27:09.653 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:09 vm00 bash[56160]: ts=2026-03-09T17:27:09.416Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T17:27:09.885 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:09 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:09.908 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 bash[21037]: [09/Mar/2026:17:27:09] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T17:27:09.908 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 bash[21037]: [09/Mar/2026:17:27:09] ENGINE Bus STOPPED 2026-03-09T17:27:09.908 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 bash[21037]: [09/Mar/2026:17:27:09] ENGINE Bus STARTING 2026-03-09T17:27:10.275 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 bash[21037]: [09/Mar/2026:17:27:09] ENGINE Serving on http://:::9283 2026-03-09T17:27:10.275 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:09 vm00 bash[21037]: [09/Mar/2026:17:27:09] ENGINE Bus STARTED 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: cluster 2026-03-09T17:27:08.648694+0000 mgr.y (mgr.14505) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: cluster 2026-03-09T17:27:08.648694+0000 mgr.y (mgr.14505) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.274246+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.274246+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.280147+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.280147+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.289435+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.289435+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.295959+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.295959+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: cephadm 
2026-03-09T17:27:09.302463+0000 mgr.y (mgr.14505) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: cephadm 2026-03-09T17:27:09.302463+0000 mgr.y (mgr.14505) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.330906+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.330906+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.335621+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.335621+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.339053+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.339053+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.345872+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:10 vm00 bash[28333]: audit 2026-03-09T17:27:09.345872+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: cluster 2026-03-09T17:27:08.648694+0000 mgr.y (mgr.14505) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: cluster 2026-03-09T17:27:08.648694+0000 mgr.y (mgr.14505) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.274246+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.274246+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.280147+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.280147+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14505 
' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.289435+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.289435+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.295959+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.295959+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: cephadm 2026-03-09T17:27:09.302463+0000 mgr.y (mgr.14505) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: cephadm 2026-03-09T17:27:09.302463+0000 mgr.y (mgr.14505) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.330906+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.330906+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.335621+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.335621+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.339053+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.339053+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.345872+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:10 vm00 bash[20770]: audit 2026-03-09T17:27:09.345872+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: cluster 2026-03-09T17:27:08.648694+0000 mgr.y (mgr.14505) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: cluster 2026-03-09T17:27:08.648694+0000 mgr.y (mgr.14505) 
28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.274246+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.274246+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.280147+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.280147+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.289435+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.289435+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.295959+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.295959+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: cephadm 2026-03-09T17:27:09.302463+0000 mgr.y (mgr.14505) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: cephadm 2026-03-09T17:27:09.302463+0000 mgr.y (mgr.14505) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.330906+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.330906+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.335621+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.335621+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.339053+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.339053+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.345872+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:10 vm02 bash[23351]: audit 2026-03-09T17:27:09.345872+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:11.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:11 vm00 bash[28333]: audit 2026-03-09T17:27:09.339445+0000 mgr.y (mgr.14505) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:11 vm00 bash[28333]: audit 2026-03-09T17:27:09.339445+0000 mgr.y (mgr.14505) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:11 vm00 bash[28333]: cephadm 2026-03-09T17:27:09.355910+0000 mgr.y (mgr.14505) 31 : cephadm [INF] Deploying daemon grafana.a on vm02 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:11 vm00 bash[28333]: cephadm 2026-03-09T17:27:09.355910+0000 mgr.y (mgr.14505) 31 : cephadm [INF] Deploying daemon grafana.a on vm02 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:11 vm00 bash[20770]: audit 2026-03-09T17:27:09.339445+0000 mgr.y (mgr.14505) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:11 vm00 bash[20770]: audit 2026-03-09T17:27:09.339445+0000 mgr.y (mgr.14505) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:11 vm00 bash[20770]: cephadm 2026-03-09T17:27:09.355910+0000 mgr.y (mgr.14505) 31 : cephadm [INF] Deploying daemon grafana.a on vm02 2026-03-09T17:27:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:11 vm00 bash[20770]: cephadm 2026-03-09T17:27:09.355910+0000 mgr.y (mgr.14505) 31 : cephadm [INF] Deploying daemon grafana.a on vm02 2026-03-09T17:27:11.538 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:11 vm00 bash[56160]: ts=2026-03-09T17:27:11.393Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.001008871s 2026-03-09T17:27:11.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:11 vm02 bash[23351]: audit 2026-03-09T17:27:09.339445+0000 mgr.y (mgr.14505) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:11.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:11 vm02 bash[23351]: audit 2026-03-09T17:27:09.339445+0000 mgr.y (mgr.14505) 30 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T17:27:11.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:11 vm02 bash[23351]: cephadm 2026-03-09T17:27:09.355910+0000 mgr.y (mgr.14505) 31 : cephadm [INF] Deploying daemon grafana.a on vm02 2026-03-09T17:27:11.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:11 vm02 bash[23351]: cephadm 2026-03-09T17:27:09.355910+0000 mgr.y (mgr.14505) 31 : cephadm [INF] Deploying daemon grafana.a on vm02 2026-03-09T17:27:11.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:27:12.111 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.b/config 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/704991822 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4113ba0 msgr2=0x7f7ca4115f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 --2- 192.168.123.102:0/704991822 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4113ba0 0x7f7ca4115f90 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f7c94009f90 tx=0x7f7c9402f390 comp rx=0 tx=0).stop 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/704991822 shutdown_connections 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 --2- 192.168.123.102:0/704991822 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4113ba0 0x7f7ca4115f90 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 --2- 192.168.123.102:0/704991822 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4077f40 0x7f7ca4113660 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 --2- 192.168.123.102:0/704991822 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077620 0x7f7ca4077a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/704991822 >> 192.168.123.102:0/704991822 conn(0x7f7ca41009e0 msgr2=0x7f7ca4102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:12.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/704991822 shutdown_connections 2026-03-09T17:27:12.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/704991822 wait complete. 
2026-03-09T17:27:12.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 Processor -- start 2026-03-09T17:27:12.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 -- start start 2026-03-09T17:27:12.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca8fc9640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 0x7f7ca41a0fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:12.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.300+0000 7f7ca2575640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 0x7f7ca41a0fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:12.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2575640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 0x7f7ca41a0fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:36606/0 (socket says 192.168.123.102:36606) 2026-03-09T17:27:12.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2575640 1 -- 192.168.123.102:0/635575425 learned_addr learned my addr 192.168.123.102:0/635575425 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077f40 0x7f7ca41a14e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4113ba0 0x7f7ca41a5870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7ca4118a30 con 0x7f7ca4077f40 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f7ca41188b0 con 0x7f7ca4077620 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f7ca4118bb0 con 0x7f7ca4113ba0 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4113ba0 0x7f7ca41a5870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca1d74640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077f40 
0x7f7ca41a14e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 -- 192.168.123.102:0/635575425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 msgr2=0x7f7ca41a0fa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 0x7f7ca41a0fa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 -- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077f40 msgr2=0x7f7ca41a14e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077f40 0x7f7ca41a14e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7ca41a5ff0 con 0x7f7ca4113ba0 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca1d74640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077f40 0x7f7ca41a14e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2575640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 0x7f7ca41a0fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca2d76640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4113ba0 0x7f7ca41a5870 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f7ca4076e10 tx=0x7f7c94004290 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:12.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7c94004590 con 0x7f7ca4113ba0 2026-03-09T17:27:12.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7ca41a6280 con 0x7f7ca4113ba0 2026-03-09T17:27:12.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f7c94004730 con 0x7f7ca4113ba0 2026-03-09T17:27:12.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f7ca41adb20 con 0x7f7ca4113ba0 2026-03-09T17:27:12.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7c94007aa0 con 0x7f7ca4113ba0 2026-03-09T17:27:12.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f7c94007c40 con 0x7f7ca4113ba0 2026-03-09T17:27:12.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.304+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7ca407aba0 con 0x7f7ca4113ba0 2026-03-09T17:27:12.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.308+0000 7f7c8b7fe640 1 --2- 192.168.123.102:0/635575425 >> v2:192.168.123.100:6800/2673235927 conn(0x7f7c70077630 0x7f7c70079af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:12.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.308+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f7c940be160 con 0x7f7ca4113ba0 2026-03-09T17:27:12.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.308+0000 7f7ca2575640 1 --2- 192.168.123.102:0/635575425 >> v2:192.168.123.100:6800/2673235927 conn(0x7f7c70077630 0x7f7c70079af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:12.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.308+0000 7f7ca2575640 1 --2- 192.168.123.102:0/635575425 >> v2:192.168.123.100:6800/2673235927 conn(0x7f7c70077630 0x7f7c70079af0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f7c980042a0 tx=0x7f7c98009340 comp rx=0 tx=0).ready entity=mgr.14505 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:12.312 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.308+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7ca407aba0 con 0x7f7ca4113ba0 2026-03-09T17:27:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:12 vm02 bash[23351]: cluster 2026-03-09T17:27:10.648935+0000 mgr.y (mgr.14505) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:12 vm02 bash[23351]: cluster 2026-03-09T17:27:10.648935+0000 mgr.y (mgr.14505) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:12 vm02 bash[23351]: audit 2026-03-09T17:27:11.728199+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:12 vm02 bash[23351]: audit 2026-03-09T17:27:11.728199+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:12 vm02 bash[23351]: audit 2026-03-09T17:27:11.766393+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:12 vm02 bash[23351]: audit 2026-03-09T17:27:11.766393+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:12.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.456+0000 7f7ca8fc9640 1 -- 192.168.123.102:0/635575425 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f7ca4077a00 con 0x7f7ca4113ba0 2026-03-09T17:27:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:12 vm00 bash[28333]: cluster 2026-03-09T17:27:10.648935+0000 mgr.y (mgr.14505) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:12 vm00 bash[28333]: cluster 2026-03-09T17:27:10.648935+0000 mgr.y (mgr.14505) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:12 vm00 bash[28333]: audit 2026-03-09T17:27:11.728199+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:12 vm00 bash[28333]: audit 2026-03-09T17:27:11.728199+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:12 vm00 bash[28333]: audit 2026-03-09T17:27:11.766393+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:12 vm00 bash[28333]: audit 2026-03-09T17:27:11.766393+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:12 vm00 bash[20770]: cluster 2026-03-09T17:27:10.648935+0000 mgr.y (mgr.14505) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:12 vm00 bash[20770]: cluster 2026-03-09T17:27:10.648935+0000 mgr.y (mgr.14505) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:12 vm00 bash[20770]: audit 2026-03-09T17:27:11.728199+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:12 vm00 bash[20770]: audit 2026-03-09T17:27:11.728199+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:12 vm00 bash[20770]: audit 2026-03-09T17:27:11.766393+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:12 vm00 bash[20770]: audit 2026-03-09T17:27:11.766393+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:12.577 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c8b7fe640 1 -- 192.168.123.102:0/635575425 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) ==== 170+0+59 (secure 0 0 0) 0x7f7c9403d070 con 0x7f7ca4113ba0 2026-03-09T17:27:12.577 INFO:teuthology.orchestra.run.vm02.stdout:[client.1] 2026-03-09T17:27:12.578 INFO:teuthology.orchestra.run.vm02.stdout: key = AQDwAq9p3wCCGxAAU4O/HAwaWHBrDsbqJU0Wjw== 2026-03-09T17:27:12.579 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 -- 192.168.123.102:0/635575425 >> v2:192.168.123.100:6800/2673235927 conn(0x7f7c70077630 msgr2=0x7f7c70079af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:12.579 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 --2- 192.168.123.102:0/635575425 >> v2:192.168.123.100:6800/2673235927 conn(0x7f7c70077630 0x7f7c70079af0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f7c980042a0 tx=0x7f7c98009340 comp rx=0 tx=0).stop 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 -- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4113ba0 msgr2=0x7f7ca41a5870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 --2- 
192.168.123.102:0/635575425 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4113ba0 0x7f7ca41a5870 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f7ca4076e10 tx=0x7f7c94004290 comp rx=0 tx=0).stop 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 -- 192.168.123.102:0/635575425 shutdown_connections 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 --2- 192.168.123.102:0/635575425 >> v2:192.168.123.100:6800/2673235927 conn(0x7f7c70077630 0x7f7c70079af0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ca4113ba0 0x7f7ca41a5870 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ca4077f40 0x7f7ca41a14e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 --2- 192.168.123.102:0/635575425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7ca4077620 0x7f7ca41a0fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 -- 192.168.123.102:0/635575425 >> 192.168.123.102:0/635575425 conn(0x7f7ca41009e0 msgr2=0x7f7ca4102dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 -- 192.168.123.102:0/635575425 shutdown_connections 2026-03-09T17:27:12.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-09T17:27:12.576+0000 7f7c897fa640 1 -- 192.168.123.102:0/635575425 wait complete. 2026-03-09T17:27:12.689 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-09T17:27:12.689 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T17:27:12.689 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T17:27:12.704 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-09T17:27:12.704 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T17:27:12.704 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph mgr dump --format=json 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:11.450804+0000 mgr.y (mgr.14505) 33 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:11.450804+0000 mgr.y (mgr.14505) 33 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:12.460940+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.102:0/635575425' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:12.460940+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.102:0/635575425' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:12.461379+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:12.461379+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:12.533951+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:13.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:13 vm02 bash[23351]: audit 2026-03-09T17:27:12.533951+0000 mon.a (mon.0) 785 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:11.450804+0000 mgr.y (mgr.14505) 33 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:11.450804+0000 mgr.y (mgr.14505) 33 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:12.460940+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.102:0/635575425' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:12.460940+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.102:0/635575425' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:12.461379+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:12.461379+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:12.533951+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:13 vm00 bash[28333]: audit 2026-03-09T17:27:12.533951+0000 mon.a (mon.0) 785 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:11.450804+0000 mgr.y (mgr.14505) 33 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:11.450804+0000 mgr.y (mgr.14505) 33 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:12.460940+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.102:0/635575425' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:12.460940+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.102:0/635575425' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:12.461379+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:12.461379+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:12.533951+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:13 vm00 bash[20770]: audit 2026-03-09T17:27:12.533951+0000 mon.a (mon.0) 785 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:27:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:14 vm00 bash[28333]: cluster 2026-03-09T17:27:12.649240+0000 mgr.y (mgr.14505) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:14 vm00 bash[28333]: cluster 2026-03-09T17:27:12.649240+0000 mgr.y (mgr.14505) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:14 vm00 bash[20770]: cluster 2026-03-09T17:27:12.649240+0000 mgr.y (mgr.14505) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:14 vm00 bash[20770]: cluster 2026-03-09T17:27:12.649240+0000 mgr.y (mgr.14505) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:14.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:14 vm02 bash[23351]: cluster 2026-03-09T17:27:12.649240+0000 mgr.y (mgr.14505) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:14.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:14 vm02 bash[23351]: cluster 2026-03-09T17:27:12.649240+0000 mgr.y (mgr.14505) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T17:27:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:15 vm00 bash[20770]: cluster 2026-03-09T17:27:14.649731+0000 mgr.y (mgr.14505) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:15 vm00 bash[20770]: cluster 2026-03-09T17:27:14.649731+0000 mgr.y (mgr.14505) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:15 vm00 bash[28333]: cluster 2026-03-09T17:27:14.649731+0000 mgr.y (mgr.14505) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:15 vm00 bash[28333]: cluster 2026-03-09T17:27:14.649731+0000 mgr.y (mgr.14505) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:15.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:15 vm02 bash[23351]: cluster 2026-03-09T17:27:14.649731+0000 mgr.y (mgr.14505) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:15.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:15 vm02 bash[23351]: cluster 
2026-03-09T17:27:14.649731+0000 mgr.y (mgr.14505) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T17:27:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:27:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:27:17.357 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:17.696 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- 192.168.123.100:0/3898668949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0106d70 msgr2=0x7f57f0107700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- 192.168.123.100:0/3898668949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0106d70 0x7f57f0107700 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f57e000b0a0 tx=0x7f57e002f4a0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- 192.168.123.100:0/3898668949 shutdown_connections 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- 192.168.123.100:0/3898668949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0106d70 0x7f57f0107700 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- 192.168.123.100:0/3898668949 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101a90 0x7f57f0106660 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- 192.168.123.100:0/3898668949 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57f0101170 0x7f57f0101550 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- 192.168.123.100:0/3898668949 >> 192.168.123.100:0/3898668949 conn(0x7f57f00780d0 msgr2=0x7f57f00ff450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- 192.168.123.100:0/3898668949 shutdown_connections 2026-03-09T17:27:17.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- 192.168.123.100:0/3898668949 wait complete. 
2026-03-09T17:27:17.698 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 Processor -- start 2026-03-09T17:27:17.698 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- start start 2026-03-09T17:27:17.698 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101170 0x7f57f01a0f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:17.698 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 0x7f57f01a14b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57eeffd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101170 0x7f57f01a0f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57eeffd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101170 0x7f57f01a0f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:52970/0 (socket says 192.168.123.100:52970) 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57ee7fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 0x7f57f01a14b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57ee7fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 0x7f57f01a14b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33904/0 (socket says 192.168.123.100:33904) 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57f0106d70 0x7f57f01a5840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f57f0118ae0 con 0x7f57f0101a90 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f57f0118960 con 0x7f57f0106d70 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57f5347640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f57f0118c60 con 0x7f57f0101170 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57eeffd640 1 -- 192.168.123.100:0/1503936758 learned_addr learned my addr 192.168.123.100:0/1503936758 (peer_addr_for_me 
v2:192.168.123.100:0/0) 2026-03-09T17:27:17.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.695+0000 7f57ef7fe640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57f0106d70 0x7f57f01a5840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:17.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57ee7fc640 1 -- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101170 msgr2=0x7f57f01a0f70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:17.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57ee7fc640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101170 0x7f57f01a0f70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57ee7fc640 1 -- 192.168.123.100:0/1503936758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57f0106d70 msgr2=0x7f57f01a5840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:17.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57ee7fc640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57f0106d70 0x7f57f01a5840 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57ee7fc640 1 -- 192.168.123.100:0/1503936758 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f57f01a5fc0 con 0x7f57f0101a90 2026-03-09T17:27:17.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57ee7fc640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 0x7f57f01a14b0 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f57e400da20 tx=0x7f57e400def0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:17.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f57e4014070 con 0x7f57f0101a90 2026-03-09T17:27:17.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f57e4004540 con 0x7f57f0101a90 2026-03-09T17:27:17.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f57f01a62b0 con 0x7f57f0101a90 2026-03-09T17:27:17.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f57f01adaf0 con 0x7f57f0101a90 2026-03-09T17:27:17.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 
v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f57e4005020 con 0x7f57f0101a90 2026-03-09T17:27:17.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f57e40040d0 con 0x7f57f0101a90 2026-03-09T17:27:17.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f57b4005180 con 0x7f57f0101a90 2026-03-09T17:27:17.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57cffff640 1 --2- 192.168.123.100:0/1503936758 >> v2:192.168.123.100:6800/2673235927 conn(0x7f57c8077710 0x7f57c8079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:17.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.699+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f57e4099b40 con 0x7f57f0101a90 2026-03-09T17:27:17.707 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.703+0000 7f57eeffd640 1 --2- 192.168.123.100:0/1503936758 >> v2:192.168.123.100:6800/2673235927 conn(0x7f57c8077710 0x7f57c8079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:17.707 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.703+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f57e40669a0 con 0x7f57f0101a90 2026-03-09T17:27:17.707 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.703+0000 7f57eeffd640 1 --2- 192.168.123.100:0/1503936758 >> v2:192.168.123.100:6800/2673235927 conn(0x7f57c8077710 0x7f57c8079bd0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f57d8004640 tx=0x7f57d80092c0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:17.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.823+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7f57b4005470 con 0x7f57f0101a90 2026-03-09T17:27:17.827 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.823+0000 7f57cffff640 1 -- 192.168.123.100:0/1503936758 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v21) ==== 74+0+192038 (secure 0 0 0) 0x7f57e406b850 con 0x7f57f0101a90 2026-03-09T17:27:17.827 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:27:17.837 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 >> v2:192.168.123.100:6800/2673235927 conn(0x7f57c8077710 msgr2=0x7f57c8079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:17.837 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 --2- 192.168.123.100:0/1503936758 >> v2:192.168.123.100:6800/2673235927 conn(0x7f57c8077710 0x7f57c8079bd0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f57d8004640 
tx=0x7f57d80092c0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 msgr2=0x7f57f01a14b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 0x7f57f01a14b0 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f57e400da20 tx=0x7f57e400def0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 shutdown_connections 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 --2- 192.168.123.100:0/1503936758 >> v2:192.168.123.100:6800/2673235927 conn(0x7f57c8077710 0x7f57c8079bd0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57f0106d70 0x7f57f01a5840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f57f0101a90 0x7f57f01a14b0 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 --2- 192.168.123.100:0/1503936758 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f57f0101170 0x7f57f01a0f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:17.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 >> 192.168.123.100:0/1503936758 conn(0x7f57f00780d0 msgr2=0x7f57f00ff260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:17.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 shutdown_connections 2026-03-09T17:27:17.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:17.835+0000 7f57f5347640 1 -- 192.168.123.100:0/1503936758 wait complete. 
2026-03-09T17:27:17.918 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"flags":0,"active_gid":14505,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":2673235927}]},"active_addr":"192.168.123.100:6800/2673235927","active_change":"2026-03-09T17:26:56.573753+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24409,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate 
as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/","prometheus":"http://192.168.123.100:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":65,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3618734699}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1946101275}]},{"n
ame":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":532878360}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1929488338}]}]} 2026-03-09T17:27:17.920 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T17:27:17.920 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T17:27:17.920 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd dump --format=json 2026-03-09T17:27:18.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:17 vm02 bash[23351]: cluster 2026-03-09T17:27:16.650038+0000 mgr.y (mgr.14505) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:18.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:17 vm02 bash[23351]: cluster 2026-03-09T17:27:16.650038+0000 mgr.y (mgr.14505) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:17 vm00 bash[28333]: cluster 2026-03-09T17:27:16.650038+0000 mgr.y (mgr.14505) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:17 vm00 bash[28333]: cluster 2026-03-09T17:27:16.650038+0000 mgr.y (mgr.14505) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:17 vm00 bash[20770]: cluster 2026-03-09T17:27:16.650038+0000 mgr.y (mgr.14505) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:17 vm00 bash[20770]: cluster 2026-03-09T17:27:16.650038+0000 mgr.y (mgr.14505) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:18.516 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:18.516 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:27:18.516 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.516 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.516 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.516 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.517 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.517 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.517 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.517 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.517 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:27:18.517 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 systemd[1]: Started Ceph grafana.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e.
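The KillMode=none warnings above refer to line 23 of the cephadm-generated unit template ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service, and systemd's own suggestion is to move to a safer setting such as 'mixed' or 'control-group'. A minimal sketch of what such an override could look like, assuming a standard systemd drop-in; the drop-in path and file name below are illustrative only, and cephadm normally owns and regenerates this unit file, so a permanent fix would land in the cephadm template rather than in a manual override:

    # hypothetical drop-in: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service.d/10-killmode.conf
    [Service]
    # 'mixed' sends the stop signal (SIGTERM) only to the main process,
    # while any remaining processes in the unit's cgroup are still cleaned up on stop,
    # instead of disabling process lifecycle management entirely as KillMode=none does.
    KillMode=mixed

    # afterwards, reload unit definitions so the drop-in takes effect:
    #   systemctl daemon-reload
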
2026-03-09T17:27:18.767 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.760953293Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T17:27:18Z 2026-03-09T17:27:18.767 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761323276Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T17:27:18.767 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761387635Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T17:27:18.767 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.76144344Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761479587Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761541813Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761578793Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.76162563Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761662178Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761712222Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761745755Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761791641Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761825203Z level=info msg=Target target=[all] 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761875809Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761909551Z level=info msg="Path 
Data" path=/var/lib/grafana 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761956779Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.761989701Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.762036989Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=settings t=2026-03-09T17:27:18.762071253Z level=info msg="App mode production" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=sqlstore t=2026-03-09T17:27:18.762282588Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=sqlstore t=2026-03-09T17:27:18.762340337Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.762747687Z level=info msg="Starting DB migrations" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.763490035Z level=info msg="Executing migration" id="create migration_log table" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.764040243Z level=info msg="Migration successfully executed" id="create migration_log table" duration=548.917µs 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.765639222Z level=info msg="Executing migration" id="create user table" 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.766065167Z level=info msg="Migration successfully executed" id="create user table" duration=427.59µs 2026-03-09T17:27:18.768 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.767291239Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:17.825356+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.100:0/1503936758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:17.825356+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 
192.168.123.100:0/1503936758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.563863+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.563863+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.571555+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.571555+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.577113+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.577113+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.582928+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.582928+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.595997+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.020 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:18 vm02 bash[23351]: audit 2026-03-09T17:27:18.595997+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.767841918Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=550.589µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.76939955Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.769831016Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=431.466µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.771099357Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.771553355Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=454.079µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 
17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.772591115Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.773015518Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=424.524µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.774393363Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.775362655Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=969.05µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.77632315Z level=info msg="Executing migration" id="create user table v2" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.776819908Z level=info msg="Migration successfully executed" id="create user table v2" duration=496.668µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.777978143Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.778476034Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=497.871µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.77945896Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.779916627Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=460.842µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.781575968Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.781900784Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=321.351µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.782858305Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.783270825Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=412.812µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.784206614Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 
bash[51223]: logger=migrator t=2026-03-09T17:27:18.784768064Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=561.37µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.786086447Z level=info msg="Executing migration" id="Update user table charset" 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.786178931Z level=info msg="Migration successfully executed" id="Update user table charset" duration=93.043µs 2026-03-09T17:27:19.020 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.787138153Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.787668054Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=529.951µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.788581782Z level=info msg="Executing migration" id="Add missing user data" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.788784761Z level=info msg="Migration successfully executed" id="Add missing user data" duration=203.019µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.789916676Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.790449382Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=532.587µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.791709126Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.79215533Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=446.275µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.793034163Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.793575915Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=541.412µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.794592996Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.797387819Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=2.793191ms 2026-03-09T17:27:19.021 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.798642835Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.79925017Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=608.768µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.80053429Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.800725218Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=191.348µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.801915122Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.802367677Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=450.553µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.80367468Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.804155979Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=481.57µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.805837823Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.806365018Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=527.516µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.807722175Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.808304403Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=581.958µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.809585327Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.810083088Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=497.981µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.811713916Z level=info msg="Executing migration" id="create index 
IDX_temp_user_status - v1-7" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.812244347Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=531.012µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.813543625Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.813659693Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=116.618µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.814685169Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.815186726Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=501.587µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.816180133Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.816612Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=432.036µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.818283524Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.818811521Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=527.797µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.819876401Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.820299882Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=423.792µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.82185529Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.823099345Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.243734ms 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.82424192Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: 
logger=migrator t=2026-03-09T17:27:18.824756633Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=514.963µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.8260392Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.826539886Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=501.778µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.827668014Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.828145656Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=477.863µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.829632827Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.830139603Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=503.931µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.831211126Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.831800037Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=590.344µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.833565036Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.833964662Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=426.738µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.836748165Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.837305215Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=558.283µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.838549772Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.838863419Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise 
prematurely expire" duration=314.058µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.849229622Z level=info msg="Executing migration" id="create star table" 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.849734456Z level=info msg="Migration successfully executed" id="create star table" duration=507.338µs 2026-03-09T17:27:19.021 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.851377837Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.85182887Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=452.055µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.853904039Z level=info msg="Executing migration" id="create org table v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.85452527Z level=info msg="Migration successfully executed" id="create org table v1" duration=619.267µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.856159314Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.856600719Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=441.996µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.857914745Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.85825432Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=339.585µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.859539081Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.859906689Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=367.467µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.861406091Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.861766715Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=360.254µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.862925149Z level=info 
msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.863284582Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=359.372µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.864492689Z level=info msg="Executing migration" id="Update org table charset" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.864503219Z level=info msg="Migration successfully executed" id="Update org table charset" duration=11.05µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.865682503Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.865693393Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=11.512µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.867000546Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.867097657Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=97.282µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.868180692Z level=info msg="Executing migration" id="create dashboard table" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.868743224Z level=info msg="Migration successfully executed" id="create dashboard table" duration=562.292µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.869963604Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.870594363Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=630.318µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.871874997Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.872313586Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=440.543µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.873796849Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:18.874148375Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=349.433µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.875432516Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.875796205Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=364.752µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.877055789Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.877605748Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=550.028µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.879139845Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.880897741Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=1.757645ms 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.882030427Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.882529139Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=498.612µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.883559465Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.884049861Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=490.296µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.885526802Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.886059478Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=532.565µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.88739341Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.88773557Z level=info msg="Migration successfully executed" id="copy dashboard v1 
to v2" duration=339.524µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.888690334Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.889288352Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=597.947µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.890771375Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.89099428Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=223.006µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.892115086Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.893080901Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=965.614µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.89771961Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.898850715Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.127918ms 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.900409037Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.901371466Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=962.239µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.902731888Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.903308175Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=576.217µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.904296614Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.905127756Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=833.417µs 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:18.906628131Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-09T17:27:19.022 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.907168301Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=539.949µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.908248489Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.908763963Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=515.394µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.910102204Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.910257854Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=156.021µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.911700972Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.911858416Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=157.836µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.912965044Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.91378193Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=816.776µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.915031466Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.91583546Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=805.797µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.917296931Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.918108718Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=811.828µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.919122392Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-09T17:27:19.023 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.91990762Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=785.138µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.920942544Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.921185328Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=242.944µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.922371345Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.922897398Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=502.9µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.924523477Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.925066111Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=542.704µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.926297994Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.926459305Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=161.693µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.927561135Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.928091516Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=530.271µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.929643587Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.930095581Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=451.955µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.931395731Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:18.933141985Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.746344ms 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.934324765Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.934794683Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=469.878µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.936332629Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.93683631Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=503.601µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.93808837Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.938669706Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=581.197µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.940037602Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.940382187Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=357.058µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.941907869Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.942354833Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=446.755µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.943388997Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.944244866Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=853.955µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.945385177Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.9459376Z 
level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=552.412µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.94721105Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.947451009Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=239.989µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.948490372Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.9487195Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=229.119µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.949950581Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.950476624Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=523.378µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.951691354Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.952570658Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=879.184µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.954049972Z level=info msg="Executing migration" id="create data_source table" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.954598857Z level=info msg="Migration successfully executed" id="create data_source table" duration=548.786µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.955890953Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.956409822Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=520.062µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.957771508Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.958700394Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=926.101µs 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 
17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.960463449Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-09T17:27:19.023 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.961128642Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=663.881µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.962317835Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.962854037Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=536.272µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.964159858Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.966446371Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=2.288728ms 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.9676979Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.968276662Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=578.791µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.969430829Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.969989614Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=558.865µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.971347682Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.972021672Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=673.809µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.973355344Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.973785568Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=429.111µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.974997242Z 
level=info msg="Executing migration" id="Add column with_credentials" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.976169733Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.171971ms 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.977648337Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.978547228Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=898.83µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.979563868Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.979710362Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=146.344µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.980690924Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.980995123Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=304.138µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.98211173Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.983213008Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.101289ms 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.984446553Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.984677796Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=231.744µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.985802337Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.986039351Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=237.223µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.98703971Z level=info msg="Executing migration" id="Add uid column" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.987942467Z level=info msg="Migration successfully 
executed" id="Add uid column" duration=899.592µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.989343647Z level=info msg="Executing migration" id="Update uid value" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.989627346Z level=info msg="Migration successfully executed" id="Update uid value" duration=283.579µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.990701073Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.991349475Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=648.381µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.99234165Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.992909151Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=569.674µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.994422579Z level=info msg="Executing migration" id="create api_key table" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.99503825Z level=info msg="Migration successfully executed" id="create api_key table" duration=615.651µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.996367594Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.996944001Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=576.377µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.998212793Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:18 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:18.99878341Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=570.567µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.000404529Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.000979514Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=576.758µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.002412132Z level=info msg="Executing migration" 
id="drop index IDX_api_key_account_id - v1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.002980013Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=566.869µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.004179274Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.004719904Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=540.882µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.006369718Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.006985119Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=614.979µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.00805629Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.010323197Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.262589ms 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.011706252Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.012247844Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=542.223µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.013820063Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.014382685Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=561.038µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.015469766Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.015963178Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=493.483µs 2026-03-09T17:27:19.024 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.017468993Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: 
logger=migrator t=2026-03-09T17:27:19.018298503Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=831.865µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.021510697Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.021903851Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=393.616µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.022897689Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.023346558Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=448.798µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.024751064Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.024950376Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=199.853µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.026195783Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.027217733Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.022191ms 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.028230726Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.029221899Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=992.315µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.030413336Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.030641252Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=228.096µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.032034596Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.03318161Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.146672ms 2026-03-09T17:27:19.268 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.034391751Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.035426916Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.037188ms 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.036430673Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.037000718Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=568.062µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.038435309Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.038871824Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=436.466µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.039882072Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.040546024Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=663.912µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.041744654Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.042361527Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=616.773µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.043969542Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.04455698Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=587.408µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.045834718Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.046416796Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=581.888µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 
vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.047807816Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.048032357Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=224.531µs 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.049754355Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-09T17:27:19.268 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.049914854Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=44.563µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.050997026Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.052203051Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.205893ms 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.053428381Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.054947029Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.517807ms 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.056557349Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.056775257Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=218.278µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.058407608Z level=info msg="Executing migration" id="create quota table v1" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.059042023Z level=info msg="Migration successfully executed" id="create quota table v1" duration=634.397µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.060278013Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.060855603Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=577.619µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:19.062135095Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.062147358Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=12.975µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.063585254Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.064165629Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=580.274µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.065502838Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.066176517Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=678.007µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.067521662Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.06871956Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.197829ms 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.070344086Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.070357642Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=16.962µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.071584154Z level=info msg="Executing migration" id="create session table" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.072268663Z level=info msg="Migration successfully executed" id="create session table" duration=680.161µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.07359885Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.073805375Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=206.726µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.074924818Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-09T17:27:19.269 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.075110434Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=182.651µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.076546499Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.077120703Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=572.57µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.078484691Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.079102967Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=618.195µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.08038371Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.080396024Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=12.833µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.081944659Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.08195617Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=12.072µs 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.083040997Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.08419281Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.151652ms 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.085356274Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.086481117Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.124683ms 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.087517414Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.087689666Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=174.637µs 
2026-03-09T17:27:19.269 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.089016425Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.089214706Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=201.748µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.090427994Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.090914833Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=486.69µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.092210465Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.092225503Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=14.286µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.093794165Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.094928305Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.133989ms 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.096014244Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.096277266Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=262.992µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.097428899Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.098757471Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.328122ms 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.099901369Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.101172806Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.271326ms 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.102905415Z level=info msg="Executing 
migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.103086392Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=181.067µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.104541081Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.105176048Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=632.642µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.106421355Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.107000858Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=579.523µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.108571765Z level=info msg="Executing migration" id="create alert table v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.109298913Z level=info msg="Migration successfully executed" id="create alert table v1" duration=725.145µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.110460254Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.111013478Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=553.384µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.112515024Z level=info msg="Executing migration" id="add index alert state" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.11310105Z level=info msg="Migration successfully executed" id="add index alert state" duration=586.186µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.114279111Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.114826724Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=547.813µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.116507507Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.117112608Z level=info msg="Migration successfully 
executed" id="Create alert_rule_tag table v1" duration=606.183µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.118631006Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.119217702Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=586.245µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.120528531Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.121092285Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=563.753µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.122361578Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.12589392Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=3.532202ms 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.12742439Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.127926719Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=502.489µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.12899194Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.12956465Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=575.485µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.13084344Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.131430778Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=583.871µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.132898601Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.133366666Z 
level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=468.395µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.134550939Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.135053329Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=502.3µs 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.136011971Z level=info msg="Executing migration" id="Add column is_default" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.137323732Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.311772ms 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.138642397Z level=info msg="Executing migration" id="Add column frequency" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.139812554Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.170107ms 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.140781204Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-09T17:27:19.270 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.141992929Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.205784ms 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.143183274Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.144389578Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.206104ms 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.145743018Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.146308975Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=565.967µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.14748313Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.147495142Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=12.213µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:19.14851593Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.148527071Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=11.361µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.149741872Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.150294154Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=552.242µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.151810628Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.15235194Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=541.302µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.153537846Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.154009127Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=472.453µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.154968922Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.155446674Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=477.673µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.156628453Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.157178651Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=549.878µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.158267066Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.159598013Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.331979ms 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.160561955Z level=info 
msg="Executing migration" id="Add column uid in alert_notification" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.161844211Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.279532ms 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.163279524Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.163499715Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=220.18µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.164534348Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.165059651Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=525.394µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.166195645Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.166730284Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=534.749µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.168017829Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.169325293Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.307314ms 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.170427373Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.170594926Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=167.513µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.171659716Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.172229201Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=569.565µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.17361462Z level=info msg="Executing migration" id="Add non-unique 
index alert_rule_tag_alert_id" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.174233627Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=618.886µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.175432217Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.175620258Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=188.07µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.176643451Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.177241228Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=597.668µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.178592383Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.179119258Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=526.774µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.180244631Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.180782206Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=537.455µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.182015781Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.182551282Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=535.311µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.184018916Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.184574243Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=555.268µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.185725165Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.186282406Z level=info msg="Migration successfully executed" id="add index annotation 4 
v3" duration=557.171µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.187394805Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.187406908Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=16.099µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.188663296Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.19004063Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.377144ms 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.191345329Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.191877533Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=532.355µs 2026-03-09T17:27:19.271 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.192805227Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.194096801Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.28981ms 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.195493952Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.195924377Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=430.185µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.196832274Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.197433798Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=601.433µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.198618743Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.199274489Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=654.464µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 
vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.200660368Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.204748459Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=4.08794ms 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.205951848Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.206268861Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=316.902µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.207398021Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.207746853Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=348.762µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.209287964Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.209433557Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=142.417µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.210631254Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.210878707Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=246.401µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.211937185Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.212054805Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=117.701µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.212945309Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.214153537Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.211604ms 2026-03-09T17:27:19.272 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.217565244Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.218955742Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.395127ms 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.220046772Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.220658355Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=610.982µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.221849892Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.222392245Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=542.343µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.22390308Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.224177782Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=273.642µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.22538547Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.226882739Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.496949ms 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.228018441Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.228639823Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=623.616µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.230282603Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.230542778Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=260.046µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.231682429Z 
level=info msg="Executing migration" id="Move region to single row" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.232011404Z level=info msg="Migration successfully executed" id="Move region to single row" duration=328.874µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.233251492Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.233808823Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=556.31µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.235190625Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.235739642Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=549.115µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.236764147Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.237337108Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=572.86µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.238409052Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.238888397Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=475.738µs 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.240124618Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T17:27:19.272 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.240595317Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=468.155µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.24169876Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.242170322Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=471.533µs 2026-03-09T17:27:19.273 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.24316491Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.243266059Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=101.26µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.244756264Z level=info msg="Executing migration" id="create test_data table" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.245267359Z level=info msg="Migration successfully executed" id="create test_data table" duration=510.785µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.246375521Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.246785027Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=409.245µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.247886465Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.248359098Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=472.343µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.24981555Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.250294646Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=480.389µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.251453552Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.251625442Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=168.214µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.252601748Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.252852596Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=250.638µs 2026-03-09T17:27:19.273 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.253915594Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.254029215Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=113.682µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.255274322Z level=info msg="Executing migration" id="create team table" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.255681243Z level=info msg="Migration successfully executed" id="create team table" duration=406.661µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.256815483Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.257375681Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=560.367µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.258545287Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.259084655Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=540.24µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.260507944Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.261968214Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.46017ms 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.263151385Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.26332524Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=173.895µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.264357399Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.264839099Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=481.651µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.266394426Z level=info msg="Executing migration" id="create team member table" 2026-03-09T17:27:19.273 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.266820833Z level=info msg="Migration successfully executed" id="create team member table" duration=426.427µs 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.267991863Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-09T17:27:19.273 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.268630095Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=638.372µs 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:17.825356+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.100:0/1503936758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:17.825356+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.100:0/1503936758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.563863+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.563863+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.571555+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.571555+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.577113+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.577113+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.582928+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.582928+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.595997+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:18 vm00 bash[28333]: audit 2026-03-09T17:27:18.595997+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: 
audit 2026-03-09T17:27:17.825356+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.100:0/1503936758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:17.825356+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.100:0/1503936758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.563863+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.563863+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.571555+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.571555+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.577113+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.577113+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.582928+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.582928+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.595997+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:18 vm00 bash[20770]: audit 2026-03-09T17:27:18.595997+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.27141503Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.272038645Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=623.795µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.273745034Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.274333614Z 
level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=588.57µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.2756637Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.277239226Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.575436ms 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.278466388Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.279987832Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.521836ms 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.281580459Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.283084199Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.503409ms 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.284214712Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.284779699Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=564.956µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.286133859Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.286676212Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=542.585µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.288223475Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.288770117Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=546.521µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.290134727Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.291883615Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" 
duration=1.748248ms 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.293236023Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.293745956Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=509.893µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.29549756Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.296029604Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=530.431µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.297317512Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.297814701Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=496.878µs 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.299168992Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-09T17:27:19.519 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.29969248Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=523.448µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.301276492Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.301671259Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=394.727µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.303049845Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.303258425Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=208.72µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.304779197Z level=info msg="Executing migration" id="create tag table" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.305357067Z level=info msg="Migration successfully executed" id="create tag table" duration=578.21µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:19.306621029Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.307198138Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=577.088µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.308503087Z level=info msg="Executing migration" id="create login attempt table" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.308941727Z level=info msg="Migration successfully executed" id="create login attempt table" duration=462.685µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.310575621Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.31108343Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=507.678µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.312403317Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.31302037Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=625.668µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.314191158Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.318327509Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.135879ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.319987462Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.320450998Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=463.817µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.321660169Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.322166424Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=506.437µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.323417643Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-09T17:27:19.520 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.323666027Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=246.65µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.324654485Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.325077686Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=423.23µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.326989057Z level=info msg="Executing migration" id="create user auth table" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.327415896Z level=info msg="Migration successfully executed" id="create user auth table" duration=424.895µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.328489923Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.329218384Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=729.523µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.330808346Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.330941024Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=133.019µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.332186272Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.33383898Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.652678ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.335025518Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.336616342Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.587537ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.337734541Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.339388422Z level=info msg="Migration 
successfully executed" id="Add OAuth token type to user_auth" duration=1.65347ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.340600337Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.342197702Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.597375ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.343332022Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.343840143Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=514.212µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.345019096Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.346618505Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.59916ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.34812421Z level=info msg="Executing migration" id="create server_lock table" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.348583378Z level=info msg="Migration successfully executed" id="create server_lock table" duration=458.729µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.349764777Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.350266975Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=502.348µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.351497034Z level=info msg="Executing migration" id="create user auth token table" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.351984194Z level=info msg="Migration successfully executed" id="create user auth token table" duration=487.221µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.353796301Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.354329538Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=533.257µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 
bash[51223]: logger=migrator t=2026-03-09T17:27:19.35568478Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.35620384Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=519.11µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.357470297Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.358020936Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=553.896µs 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.359692922Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.361408128Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.715036ms 2026-03-09T17:27:19.520 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.362492093Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.363009972Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=517.959µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.364286257Z level=info msg="Executing migration" id="create cache_data table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.364743512Z level=info msg="Migration successfully executed" id="create cache_data table" duration=457.294µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.3663377Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.366852293Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=514.521µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.368222383Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.368689095Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=466.492µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.370177317Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-09T17:27:19.521 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.370698672Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=521.074µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.372381216Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.372492635Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=114.113µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.373676147Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.373828Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=152.014µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.374830474Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.375284954Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=454.088µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.376848896Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.377380519Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=531.553µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.378677904Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.379225278Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=547.183µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.380481536Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.380599467Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=119.845µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.381799409Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and 
title columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.382312418Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=513.049µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.383579657Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.384081865Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=502.189µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.385070773Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.385591055Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=520.192µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.387081391Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.387900652Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=820.673µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.388974218Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.390837481Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.863162ms 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.392091254Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.392609983Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=516.755µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.394106601Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.394243877Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=137.325µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.39528263Z level=info msg="Executing migration" 
id="recreate alert_definition_version table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.395790548Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=509.292µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.396744932Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.397299499Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=556.9µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.398768855Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.399292053Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=525.252µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.400218424Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.40032322Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=104.927µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.401393811Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.401893675Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=499.534µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.403273354Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.403767227Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=495.997µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.404723124Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.405249527Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state 
columns" duration=526.364µs 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.406603929Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T17:27:19.521 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.407125894Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=521.715µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.408385989Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.410279809Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=1.893269ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.411517883Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.412053093Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=535.361µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.413490149Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.414020591Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=529.5µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.414967831Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.422454784Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=7.483907ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.424237466Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.431550344Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.31381ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.432992509Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: 
logger=migrator t=2026-03-09T17:27:19.433553387Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=561.499µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.434767256Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.435292728Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=525.533µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.43782439Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.439562999Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.738429ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.440732375Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.44239388Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.658399ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.443706423Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.444200787Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=494.413µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.445540852Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.446041408Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=500.445µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.4476963Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.448197437Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=501.076µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.449478752Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T17:27:19.522 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.450024311Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=539.248µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.451613201Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.451731101Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=118.191µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.452690455Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.454529832Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.839177ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.455644074Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.457406639Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.762103ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.458595561Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.460370318Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.773535ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.461822513Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.462313951Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=491.428µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.467140232Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.467721118Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=580.265µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.468686882Z level=info msg="Executing migration" id="add dashboard_uid column to 
alert_rule" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.470960351Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=2.271505ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.472475644Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.474199816Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.724083ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.475379681Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.475853878Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=474.026µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.477112981Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.478785066Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.671975ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.480260113Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.481978013Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.717679ms 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.483058713Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.483165051Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=106.409µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.484152849Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.484714979Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=562.04µs 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.48641129Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version 
columns" 2026-03-09T17:27:19.522 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.486943264Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=531.904µs 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.488285101Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.488819219Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=531.923µs 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.490118738Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.490232892Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=114.474µs 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.491527962Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.49337226Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.843986ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.494477986Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.496269284Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.791158ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.497460641Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.49925281Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.7921ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.500488139Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.502347073Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.859705ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 
vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.503558838Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.505399037Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.838096ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.50657244Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.506673229Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=101.069µs 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.507929006Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.508358478Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=429.552µs 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.509691871Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.511583417Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=1.891335ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.512579097Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.512679665Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=100.738µs 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.513844532Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.515743011Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.898118ms 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.517097883Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-09T17:27:19.523 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.517649373Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=552.463µs 
2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.518977626Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.521148735Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.17205ms 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.522419849Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.522858579Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=438.74µs 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.524781493Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.525331531Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=550.088µs 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.526779218Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.528649113Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.870155ms 2026-03-09T17:27:19.771 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.529962567Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.530375328Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=412.442µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.532426121Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.532948267Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=521.956µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.534483838Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.5349024Z level=info msg="Migration successfully executed" id="create alert_image table" duration=418.282µs 2026-03-09T17:27:19.772 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.536424193Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.536936931Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=512.707µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.538100326Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.538226161Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=125.665µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.539081901Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.53954687Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=465.089µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.54069703Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.541192385Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=495.357µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.542156087Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.542406725Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.543315955Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.543634229Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=318.144µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.544417093Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.544925082Z level=info msg="Migration successfully executed" id="add unique index on orgID to 
alert_configuration" duration=509.543µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.545918759Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.547829881Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=1.909008ms 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.548689488Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.549259723Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=569.634µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.550527303Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.551080927Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=553.234µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.552374364Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.552772288Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=396.991µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.553945601Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.554502862Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=557.252µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.555740326Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.556226776Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=486.58µs 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.557476871Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T17:27:19.772 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.557510263Z level=info msg="Migration successfully executed" 
id="increase max description length to 2048" duration=33.873µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.558594269Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.558736415Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=142.387µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.559679958Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.559929525Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=237.485µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.560791576Z level=info msg="Executing migration" id="create data_keys table" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.561328831Z level=info msg="Migration successfully executed" id="create data_keys table" duration=537.174µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.562534645Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.562982652Z level=info msg="Migration successfully executed" id="create secrets table" duration=446.715µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.564168017Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.573399471Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=9.22967ms 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.574647303Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.576756615Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.10852ms 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.577896355Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.578050333Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=154.089µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: 
logger=migrator t=2026-03-09T17:27:19.578986283Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.58923611Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.246461ms 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.590578228Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.599813639Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.232475ms 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.601151329Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.601646815Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=495.586µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.603023358Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.603614632Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=591.054µs 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.605292419Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T17:27:19.773 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.605505367Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=213.209µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.60665224Z level=info msg="Executing migration" id="create permission table" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.607158716Z level=info msg="Migration successfully executed" id="create permission table" duration=506.647µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.608461562Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.609004216Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=540.49µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.610214238Z level=info msg="Executing migration" id="add unique index 
role_id_action_scope" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.610729731Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=514.141µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.611974097Z level=info msg="Executing migration" id="create role table" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.612422615Z level=info msg="Migration successfully executed" id="create role table" duration=449.26µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.613563647Z level=info msg="Executing migration" id="add column display_name" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.61571537Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.151492ms 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.616616794Z level=info msg="Executing migration" id="add column group_name" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.618724072Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.10755ms 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.619880684Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.620409783Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=529.341µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.621658618Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.622175394Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=515.173µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.623662573Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.624210426Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=547.584µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.625500197Z level=info msg="Executing migration" id="create team role table" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.625929249Z level=info msg="Migration successfully executed" id="create team role table" duration=428.691µs 2026-03-09T17:27:19.774 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.627169818Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.62772237Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=552.592µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.62904896Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.629597215Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=547.824µs 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.630820812Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-09T17:27:19.774 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.631320646Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=499.783µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.632384033Z level=info msg="Executing migration" id="create user role table" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.632788859Z level=info msg="Migration successfully executed" id="create user role table" duration=404.475µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.633815799Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.634303661Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=487.792µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.635393267Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.635878224Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=484.605µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.637310281Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.637835302Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=523.578µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.638859806Z level=info msg="Executing migration" id="create builtin 
role table" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.639283588Z level=info msg="Migration successfully executed" id="create builtin role table" duration=423.751µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.640320887Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.640814811Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=492.702µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.641903275Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.642404972Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=501.577µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.643521369Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.645814805Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.293386ms 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.646810637Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.647322363Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=512.157µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.648434813Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.648954213Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=519.221µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.650117006Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.650621349Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=504.283µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.651469704Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.651968086Z 
level=info msg="Migration successfully executed" id="add unique index role.uid" duration=498.322µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.652827362Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.653243329Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=415.817µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.654432151Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.654958134Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=526.064µs 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.656118454Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.658391972Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.274221ms 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.659392081Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.66163296Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.240558ms 2026-03-09T17:27:19.775 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.662675397Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.664852618Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.176668ms 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.665931795Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.668149108Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.217255ms 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.669092942Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.66962067Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=527.426µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: 
logger=migrator t=2026-03-09T17:27:19.670790035Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.671349821Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=559.776µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.672963867Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.673495541Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=525.733µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.6745543Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.675010672Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=456.403µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.675810588Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.676314039Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=503.451µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.677394378Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.677480659Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=86.571µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.67848684Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.678573892Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=87.473µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.679417609Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.679700357Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=282.808µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.680465648Z level=info msg="Executing 
migration" id="dashboard permissions" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.680801555Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=336.377µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.681779063Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.682110933Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=331.779µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.683251475Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.683420461Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=164.938µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.684182325Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.684473128Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=290.993µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.685378321Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.685809316Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=430.635µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.687002516Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.68756117Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=558.524µs 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.688821656Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.691164344Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.342478ms 2026-03-09T17:27:19.776 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.69222713Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 
bash[51223]: logger=migrator t=2026-03-09T17:27:19.692345082Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=118.21µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.695387857Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.695991806Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=603.729µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.697532947Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.698065012Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=532.165µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.69928965Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.699807188Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=518.809µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.701061953Z level=info msg="Executing migration" id="add correlation config column" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.703560532Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.498039ms 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.704679956Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.705270119Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=567.21µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.7063894Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.706924891Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=535.771µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.707911585Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.714643567Z level=info msg="Migration successfully executed" id="Rename table 
correlation to correlation_tmp_qwerty - v1" duration=6.728516ms 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.716022174Z level=info msg="Executing migration" id="create correlation v2" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.716615463Z level=info msg="Migration successfully executed" id="create correlation v2" duration=593.299µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.717778145Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.718306452Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=528.417µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.719580875Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.720135281Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=556.49µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.721423799Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.72192709Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=502.078µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.723155836Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.723333138Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=177.301µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.724171204Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.72459731Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=426.317µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.72561883Z level=info msg="Executing migration" id="add provisioning column" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.727984161Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.36516ms 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:19.728809683Z level=info msg="Executing migration" id="create entity_events table" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.729242613Z level=info msg="Migration successfully executed" id="create entity_events table" duration=432.82µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.73026314Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.730743528Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=480.508µs 2026-03-09T17:27:19.777 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.731965753Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.73221645Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.732990808Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.733233191Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.734193786Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.734609193Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=415.316µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.735384031Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.735861844Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=476.521µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.737003127Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.737518029Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=514.642µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:19.738662909Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.739194453Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=530.954µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.740342429Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.740847863Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=505.394µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.741877648Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.742382301Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=504.864µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.743193427Z level=info msg="Executing migration" id="Drop public config table" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.743598664Z level=info msg="Migration successfully executed" id="Drop public config table" duration=405.227µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.744817542Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.745353655Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=536.012µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.746343574Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.746852485Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=508.882µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.747803554Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.748332091Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=529.159µs 2026-03-09T17:27:19.778 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.749657448Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.75017199Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=514.531µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.75144494Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.758884644Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=7.439495ms 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.760103843Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.762546969Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.443226ms 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.764235254Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.766638406Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.40219ms 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.767830984Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.768020448Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=189.595µs 2026-03-09T17:27:19.778 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.768912115Z level=info msg="Executing migration" id="add share column" 2026-03-09T17:27:19.779 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.771205321Z level=info msg="Migration successfully executed" id="add share column" duration=2.292966ms 2026-03-09T17:27:19.787 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:19 vm00 bash[56160]: ts=2026-03-09T17:27:19.394Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001765654s 2026-03-09T17:27:20.025 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:19 vm02 bash[23351]: cluster 2026-03-09T17:27:18.650544+0000 mgr.y (mgr.14505) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:20.025 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:19 vm02 
bash[23351]: cluster 2026-03-09T17:27:18.650544+0000 mgr.y (mgr.14505) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.774317328Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.774571753Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=256.238µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.775864179Z level=info msg="Executing migration" id="create file table" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.776397035Z level=info msg="Migration successfully executed" id="create file table" duration=532.505µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.777624509Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.778220312Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=595.823µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.779496398Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.780068978Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=572.61µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.781308414Z level=info msg="Executing migration" id="create file_meta table" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.781770508Z level=info msg="Migration successfully executed" id="create file_meta table" duration=460.802µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.78339861Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.78398715Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=587.487µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.785657733Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.785830416Z level=info msg="Migration successfully executed" 
id="set path collation in file table" duration=172.934µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.78709554Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.7872667Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=171.489µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.788553845Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.788988218Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=432.88µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.790117579Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.790351316Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=234.088µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.791406287Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.792176076Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=770.029µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.793283877Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.795867606Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.58433ms 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.797458479Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.79767885Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=220.683µs 2026-03-09T17:27:20.025 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.798688819Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.799518318Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=829.089µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 
bash[51223]: logger=migrator t=2026-03-09T17:27:19.800817056Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.801169724Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=351.957µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.802508898Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.802786908Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=277.93µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.803756009Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.80414756Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=391.22µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.805350308Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.808163617Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.813419ms 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.809331199Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.812450769Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.117727ms 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.813812224Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.814322478Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=510.404µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.815216257Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.840269073Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=25.048607ms 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 
vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.841575433Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.842136682Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=561.71µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.843496674Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.844016396Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=519.58µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.845215517Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.852860335Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=7.643416ms 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.854770024Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.857300975Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.53095ms 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.858449572Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.858656719Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=207.427µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.859625208Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.859793283Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=168.275µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.860702132Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.860898048Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=195.895µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.862326177Z 
level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.862499141Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=172.954µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.863461188Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.863642427Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=181.259µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.86459144Z level=info msg="Executing migration" id="create folder table" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.865116592Z level=info msg="Migration successfully executed" id="create folder table" duration=524.079µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.866427262Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.867033745Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=606.743µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.868484187Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.869311142Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=828.548µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.870561529Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.87064231Z level=info msg="Migration successfully executed" id="Update folder title length" duration=81.232µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.871871768Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.872456981Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=584.743µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.873662494Z level=info msg="Executing migration" id="Remove unique index for folder.title and 
folder.parent_uid" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.874244622Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=582.218µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.87529171Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.875866354Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=574.163µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.877494226Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.877792845Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=298.148µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.878724265Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.878909672Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=184.425µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.879929988Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.880457634Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=527.655µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.88186236Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.882409173Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=546.691µs 2026-03-09T17:27:20.026 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.883368886Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.883891403Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=522.556µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.884838883Z level=info msg="Executing migration" id="Add unique index 
UQE_folder_org_id_parent_uid_title" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.885420329Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=582.66µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.88679574Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.887312887Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=517.427µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.888256622Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.888698998Z level=info msg="Migration successfully executed" id="create anon_device table" duration=442.195µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.889830272Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.890413803Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=584.242µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.892021307Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.892564021Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=542.023µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.894379405Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.894849603Z level=info msg="Migration successfully executed" id="create signing_key table" duration=469.898µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.89644752Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.896994453Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=546.842µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.898492001Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator 
t=2026-03-09T17:27:19.902218256Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=3.726256ms 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.904487346Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.904711865Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=224.93µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.90612634Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.908955788Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.829279ms 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.910068708Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.910492009Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=422.66µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.911495905Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.912050903Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=556.3µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.913305538Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.913841189Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=535.741µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.915314261Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.91584876Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=534.548µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.916816761Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T17:27:20.027 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.917381686Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=564.827µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.918839861Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.919373078Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=533.207µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.920336239Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.920800536Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=464.107µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.92236027Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.922788772Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=429.093µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.924277284Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.924579659Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=305.622µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.925902942Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.925944389Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=41.908µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.926908181Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.929565227Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.656916ms 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.930619818Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T17:27:20.027 
INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.933153995Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.533917ms 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.934717106Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.934937869Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=222.026µs 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=migrator t=2026-03-09T17:27:19.935811912Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.172342095s 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=sqlstore t=2026-03-09T17:27:19.936431019Z level=info msg="Created default organization" 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=secrets t=2026-03-09T17:27:19.937831407Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=plugin.store t=2026-03-09T17:27:19.945766617Z level=info msg="Loading plugins..." 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=local.finder t=2026-03-09T17:27:19.981608406Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T17:27:20.027 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=plugin.store t=2026-03-09T17:27:19.981628654Z level=info msg="Plugins loaded" count=55 duration=35.862628ms 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=query_data t=2026-03-09T17:27:19.983211863Z level=info msg="Query Service initialization" 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=live.push_http t=2026-03-09T17:27:19.986429147Z level=info msg="Live Push Gateway initialization" 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=ngalert.migration t=2026-03-09T17:27:19.988746006Z level=info msg=Starting 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=ngalert.migration t=2026-03-09T17:27:19.989108363Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=ngalert.migration orgID=1 t=2026-03-09T17:27:19.989475048Z level=info msg="Migrating alerts for organisation" 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: logger=ngalert.migration orgID=1 t=2026-03-09T17:27:19.989992697Z level=info msg="Alerts found to migrate" alerts=0 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:19 vm02 bash[51223]: 
logger=ngalert.migration t=2026-03-09T17:27:19.991258702Z level=info msg="Completed alerting migration" 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=ngalert.state.manager t=2026-03-09T17:27:20.010523098Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=infra.usagestats.collector t=2026-03-09T17:27:20.011882559Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=provisioning.datasources t=2026-03-09T17:27:20.013389276Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T17:27:20.028 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=provisioning.datasources t=2026-03-09T17:27:20.01956107Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-09T17:27:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:19 vm00 bash[20770]: cluster 2026-03-09T17:27:18.650544+0000 mgr.y (mgr.14505) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:19 vm00 bash[20770]: cluster 2026-03-09T17:27:18.650544+0000 mgr.y (mgr.14505) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:19 vm00 bash[28333]: cluster 2026-03-09T17:27:18.650544+0000 mgr.y (mgr.14505) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:19 vm00 bash[28333]: cluster 2026-03-09T17:27:18.650544+0000 mgr.y (mgr.14505) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=provisioning.alerting t=2026-03-09T17:27:20.025140547Z level=info msg="starting to provision alerting" 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=provisioning.alerting t=2026-03-09T17:27:20.026823454Z level=info msg="finished to provision alerting" 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=grafanaStorageLogger t=2026-03-09T17:27:20.027109037Z level=info msg="Storage starting" 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=http.server t=2026-03-09T17:27:20.028483006Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 
17:27:20 vm02 bash[51223]: logger=http.server t=2026-03-09T17:27:20.028688179Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=ngalert.state.manager t=2026-03-09T17:27:20.028738131Z level=info msg="Warming state cache for startup" 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=ngalert.state.manager t=2026-03-09T17:27:20.030088876Z level=info msg="State cache has been initialized" states=0 duration=1.350204ms 2026-03-09T17:27:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=provisioning.dashboard t=2026-03-09T17:27:20.030889283Z level=info msg="starting to provision dashboards" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.040491389Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=ngalert.multiorg.alertmanager t=2026-03-09T17:27:20.041697002Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=ngalert.scheduler t=2026-03-09T17:27:20.041716388Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=ticker t=2026-03-09T17:27:20.041755922Z level=info msg=starting first_tick=2026-03-09T17:27:30Z 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.046973975Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.050983107Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.06179221Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.067110691Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.077190951Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:27:20.089623177Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: 
logger=auth t=2026-03-09T17:27:20.095099642Z level=error msg="Failed to lock and execute cleanup of expired auth token" error="database is locked" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=plugins.update.checker t=2026-03-09T17:27:20.138340049Z level=info msg="Update check succeeded" duration=97.375625ms 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=provisioning.dashboard t=2026-03-09T17:27:20.176737685Z level=info msg="finished to provision dashboards" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=grafana-apiserver t=2026-03-09T17:27:20.269243089Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T17:27:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:27:20 vm02 bash[51223]: logger=grafana-apiserver t=2026-03-09T17:27:20.269531677Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T17:27:21.845 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:27:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:21 vm02 bash[23351]: cluster 2026-03-09T17:27:20.650827+0000 mgr.y (mgr.14505) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:21 vm02 bash[23351]: cluster 2026-03-09T17:27:20.650827+0000 mgr.y (mgr.14505) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:21 vm02 bash[23351]: audit 2026-03-09T17:27:21.736635+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:22.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:21 vm02 bash[23351]: audit 2026-03-09T17:27:21.736635+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:21 vm00 bash[20770]: cluster 2026-03-09T17:27:20.650827+0000 mgr.y (mgr.14505) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:21 vm00 bash[20770]: cluster 2026-03-09T17:27:20.650827+0000 mgr.y (mgr.14505) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:21 vm00 bash[20770]: audit 2026-03-09T17:27:21.736635+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:21 vm00 bash[20770]: audit 2026-03-09T17:27:21.736635+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:21 vm00 bash[28333]: cluster 2026-03-09T17:27:20.650827+0000 mgr.y (mgr.14505) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:21 vm00 bash[28333]: 
cluster 2026-03-09T17:27:20.650827+0000 mgr.y (mgr.14505) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:21 vm00 bash[28333]: audit 2026-03-09T17:27:21.736635+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:21 vm00 bash[28333]: audit 2026-03-09T17:27:21.736635+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:22.559 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:22.742 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 -- 192.168.123.100:0/3225583155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c810f4a0 msgr2=0x7f82c8111890 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:22.742 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3225583155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c810f4a0 0x7f82c8111890 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f82b800b0a0 tx=0x7f82b802f430 comp rx=0 tx=0).stop 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 -- 192.168.123.100:0/3225583155 shutdown_connections 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3225583155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c810f4a0 0x7f82c8111890 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3225583155 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f82c8102090 0x7f82c810ef60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3225583155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f82c8101770 0x7f82c8101b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 -- 192.168.123.100:0/3225583155 >> 192.168.123.100:0/3225583155 conn(0x7f82c80fd5a0 msgr2=0x7f82c80ff9c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 -- 192.168.123.100:0/3225583155 shutdown_connections 2026-03-09T17:27:22.743 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.739+0000 7f82c7fff640 1 -- 192.168.123.100:0/3225583155 wait complete. 
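The grafana.a startup above shows Grafana's embedded SQLite store hitting lock contention: the sqlstore layer logs "Database locked, sleeping then retrying" several times while dashboards are being provisioned, and the expired-auth-token cleanup gives up once with "database is locked" before provisioning finishes. A minimal Python sketch of the same retry-on-locked pattern (illustrative only, not Grafana's implementation; the function name and parameters are invented):

    import sqlite3
    import time

    def write_with_retry(db_path, sql, params=(), retries=5, delay=0.01):
        """Retry a SQLite write while another connection holds the lock."""
        for attempt in range(retries):
            conn = sqlite3.connect(db_path, timeout=0)  # fail fast instead of blocking
            try:
                with conn:                     # commit on success, roll back on error
                    conn.execute(sql, params)
                return True
            except sqlite3.OperationalError as exc:
                if "database is locked" not in str(exc):
                    raise
                # same behaviour as the log above: sleep briefly, then retry
                time.sleep(delay * (attempt + 1))
            finally:
                conn.close()
        return False  # give up and let the caller log an error, like the auth-token cleanup above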
2026-03-09T17:27:22.745 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 Processor -- start 2026-03-09T17:27:22.745 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 -- start start 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f82c8101770 0x7f82c810d520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 0x7f82c810da60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f82c810f4a0 0x7f82c81acbd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f82c81049f0 con 0x7f82c8102090 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f82c8104870 con 0x7f82c810f4a0 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f82c8104b70 con 0x7f82c8101770 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 0x7f82c810da60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 0x7f82c810da60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:47640/0 (socket says 192.168.123.100:47640) 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 -- 192.168.123.100:0/3413288172 learned_addr learned my addr 192.168.123.100:0/3413288172 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 -- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f82c8101770 msgr2=0x7f82c810d520 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f82c8101770 0x7f82c810d520 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 -- 
192.168.123.100:0/3413288172 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f82c810f4a0 msgr2=0x7f82c81acbd0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f82c810f4a0 0x7f82c81acbd0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 -- 192.168.123.100:0/3413288172 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f82c806c890 con 0x7f82c8102090 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c67fc640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 0x7f82c810da60 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f82bc00cce0 tx=0x7f82bc007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f82bc013070 con 0x7f82c8102090 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f82c806cb80 con 0x7f82c8102090 2026-03-09T17:27:22.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f82c806d0c0 con 0x7f82c8102090 2026-03-09T17:27:22.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.743+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f82bc007e50 con 0x7f82c8102090 2026-03-09T17:27:22.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.747+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f82bc00f450 con 0x7f82c8102090 2026-03-09T17:27:22.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.747+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f82bc020020 con 0x7f82c8102090 2026-03-09T17:27:22.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.747+0000 7f82a7fff640 1 --2- 192.168.123.100:0/3413288172 >> v2:192.168.123.100:6800/2673235927 conn(0x7f82980777c0 0x7f8298079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:22.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.747+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f82bc09a7f0 con 0x7f82c8102090 2026-03-09T17:27:22.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.747+0000 7f82c6ffd640 1 --2- 192.168.123.100:0/3413288172 >> v2:192.168.123.100:6800/2673235927 conn(0x7f82980777c0 
0x7f8298079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:22.754 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.747+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f828c005180 con 0x7f82c8102090 2026-03-09T17:27:22.754 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.751+0000 7f82c6ffd640 1 --2- 192.168.123.100:0/3413288172 >> v2:192.168.123.100:6800/2673235927 conn(0x7f82980777c0 0x7f8298079c80 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f82b0009a90 tx=0x7f82b0009450 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:22.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.751+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f82bc067650 con 0x7f82c8102090 2026-03-09T17:27:22.886 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.883+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f828c005470 con 0x7f82c8102090 2026-03-09T17:27:22.887 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.883+0000 7f82a7fff640 1 -- 192.168.123.100:0/3413288172 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v65) ==== 74+0+21299 (secure 0 0 0) 0x7f82bc06c500 con 0x7f82c8102090 2026-03-09T17:27:22.887 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:27:22.887 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":65,"fsid":"16190428-1bdc-11f1-aea4-d920f1c7e51e","created":"2026-03-09T17:20:18.987272+0000","modified":"2026-03-09T17:26:56.573625+0000","last_up_change":"2026-03-09T17:26:06.190800+0000","last_in_change":"2026-03-09T17:25:49.789972+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"luminous","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T17:23:15.373179+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-09T17:26:26.069452+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-09T17:26:28.166724+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-09T17:26:29.051528+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":61,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T17:26:30.129678+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"59","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T17:26:32.208119+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"568cb8ad-2652-448a-8223-a18b7a893c0f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6801","nonce":1564530650}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":1564530650}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":1564530650}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":1564530650}]},"public_addr":"192.168.123.100:6801/1564530650","cluster_addr":"192.168.123.100:6802/1564530650","heartbeat_back_addr":"192.168.123.100:6804/1564530650","heartbeat_front_addr":"192.168.123.100:6803/1564530650","state":["exists","up"]},{"osd":1,"uuid":"e9e37873-3fd7-4a71-be36-c91d099132ac","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6805","nonce":1086087815}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":1086087815}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":1086087815}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":1086087815}]},"public_addr":"192.168.123.100:6805/1086087815","cluster_addr":"192.168.123.100:6806/1086087815","heartbeat_back_addr":"192.168.123.100:6808/1086087815","heartbeat_front_addr":"192.168.123.100:6807/1086087815","state":["exists","up"]},{"osd":2,"uuid":"7306de3d-a962-4f03-99cd-7f218259f7e5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":62,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6809","nonce":4038313383}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":4038313383}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":4038313383}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":4038313383}]},"public_addr":"192.168.123.100:6809/4038313383","cluster_addr":"192.168.123.100:6810/4038313383","heartbeat_back_addr":"192.168.123.100:6812/4038313383","heartbeat_front_addr":"192.168.123.100:6811/4038313383","state":["exists","up"]},{"osd":3,"uuid":"814401b5-fa87-447f-8581-6e4a7fde7f2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6813","nonce":652999983}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":652999983}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":652999983}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":652999983}]},"public_addr":"192.168.123.100:6813/652999983","cluster_addr":"192.168.123.100:6814/652999983","heartbeat_back_addr":"192.168.123.100:6816/652999983","heartbeat_front_addr":"192.168.123.100:6815/652999983","state":["exists","up"]},{"osd":4,"uuid":"dae36d57-3a29-4f68-ae98-8c24557509f1","up":1,"in":1,"weight":1,"primary
_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":1924015120}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6801","nonce":1924015120}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6803","nonce":1924015120}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":1924015120}]},"public_addr":"192.168.123.102:6800/1924015120","cluster_addr":"192.168.123.102:6801/1924015120","heartbeat_back_addr":"192.168.123.102:6803/1924015120","heartbeat_front_addr":"192.168.123.102:6802/1924015120","state":["exists","up"]},{"osd":5,"uuid":"af8538f6-82da-4394-a355-4fac49048640","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2433922459}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6805","nonce":2433922459}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6807","nonce":2433922459}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2433922459}]},"public_addr":"192.168.123.102:6804/2433922459","cluster_addr":"192.168.123.102:6805/2433922459","heartbeat_back_addr":"192.168.123.102:6807/2433922459","heartbeat_front_addr":"192.168.123.102:6806/2433922459","state":["exists","up"]},{"osd":6,"uuid":"8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":57,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2053868073}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6809","nonce":2053868073}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6811","nonce":2053868073}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2053868073}]},"public_addr":"192.168.123.102:6808/2053868073","cluster_addr":"192.168.123.102:6809/2053868073","heartbeat_back_addr":"192.168.123.102:6811/2053868073","heartbeat_front_addr":"192.168.123.102:6810/2053868073","state":["exists","up"]},{"osd":7,"uuid":"5e273c6f-f4ee-411b-abb9-572d1556cdc9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":50,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":3460410068}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6813","nonce":3460410068}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6815","nonce":3460410068}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":3460410068}]},"public_addr":"192.168.123.102:6812/3460410068","cluster_addr":"192.168.123.102:6813/3460410068","heartbeat_back_addr":"192.168.123.102:6815/3460410068","heartbeat_front_addr":"192.168.123.102:6814/3460410068","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:22:05.721108+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:22:38.798160+0000","dead
_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:23:11.822442+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:23:47.183439+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:24:21.490435+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:24:55.760395+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:25:31.248063+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:26:04.171875+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[{"pgid":"2.1d","mappings":[{"from":7,"to":2}]}],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6800/2312331294":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/2951980884":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/4291408295":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/829810942":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/2563954711":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/327913294":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/2883785981":"2026-03-10T17:20:41.305093+0000","192.168.123.100:6800/1655006894":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/648307942":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/1071752193":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/69665273":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/408944714":"2026-03-10T17:20:31.039632+0000","192.168.123.100:6800/3114914985":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/1156117939":"2026-03-10T17:26:56.573600+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 >> v2:192.168.123.100:6800/2673235927 conn(0x7f82980777c0 msgr2=0x7f8298079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3413288172 >> v2:192.168.123.100:6800/2673235927 conn(0x7f82980777c0 0x7f8298079c80 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f82b0009a90 tx=0x7f82b0009450 comp rx=0 tx=0).stop 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 msgr2=0x7f82c810da60 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 0x7f82c810da60 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f82bc00cce0 tx=0x7f82bc007590 comp rx=0 tx=0).stop 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 shutdown_connections 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3413288172 >> v2:192.168.123.100:6800/2673235927 conn(0x7f82980777c0 0x7f8298079c80 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f82c810f4a0 0x7f82c81acbd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f82c8102090 0x7f82c810da60 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 --2- 192.168.123.100:0/3413288172 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f82c8101770 0x7f82c810d520 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 >> 192.168.123.100:0/3413288172 conn(0x7f82c80fd5a0 msgr2=0x7f82c80ff990 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 shutdown_connections 2026-03-09T17:27:22.890 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:22.887+0000 7f82c7fff640 1 -- 192.168.123.100:0/3413288172 wait complete. 2026-03-09T17:27:23.246 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:23 vm00 bash[20770]: audit 2026-03-09T17:27:21.455225+0000 mgr.y (mgr.14505) 39 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:23.247 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:23 vm00 bash[20770]: audit 2026-03-09T17:27:21.455225+0000 mgr.y (mgr.14505) 39 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:23.272 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
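The JSON printed above is the output of `ceph osd dump --format=json`; all eight OSDs report "up": 1 and "in": 1, which is what lets the ceph_manager log "all up!". A minimal sketch of that kind of check (not teuthology's own helper, which may test only the "up" flag), assuming the dump has already been captured as a string:

    import json

    def all_osds_up_and_in(osd_dump_json: str) -> bool:
        """Return True when every OSD in an `osd dump --format=json` blob is up and in."""
        osds = json.loads(osd_dump_json)["osds"]
        # in the dump above each of the eight entries carries "up": 1 and "in": 1
        return bool(osds) and all(o["up"] == 1 and o["in"] == 1 for o in osds)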
2026-03-09T17:27:23.272 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd dump --format=json 2026-03-09T17:27:23.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:23 vm00 bash[28333]: audit 2026-03-09T17:27:21.455225+0000 mgr.y (mgr.14505) 39 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:23 vm00 bash[28333]: audit 2026-03-09T17:27:21.455225+0000 mgr.y (mgr.14505) 39 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:23.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:23 vm02 bash[23351]: audit 2026-03-09T17:27:21.455225+0000 mgr.y (mgr.14505) 39 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:23.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:23 vm02 bash[23351]: audit 2026-03-09T17:27:21.455225+0000 mgr.y (mgr.14505) 39 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: cluster 2026-03-09T17:27:22.652234+0000 mgr.y (mgr.14505) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: cluster 2026-03-09T17:27:22.652234+0000 mgr.y (mgr.14505) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:22.886857+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.100:0/3413288172' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:22.886857+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 
192.168.123.100:0/3413288172' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.262231+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.262231+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.272211+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.272211+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.970126+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.970126+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.976226+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.976226+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.977717+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.977717+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.978476+0000 mon.c (mon.2) 53 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.978476+0000 mon.c (mon.2) 53 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.983882+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:24 vm02 bash[23351]: audit 2026-03-09T17:27:23.983882+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: cluster 2026-03-09T17:27:22.652234+0000 mgr.y (mgr.14505) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
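The DEBUG line above shows how teuthology issues cluster commands on vm00: the ceph CLI is wrapped in `cephadm shell`, pinned to the CI image and the cluster fsid, so it runs inside a container with the cluster's config and keyring; the mon.a audit entries record an "osd dump" being dispatched for client.admin. A rough Python equivalent of building and running that invocation (a sketch only; the helper name is invented and teuthology's own wrapper differs):

    import json
    import subprocess

    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "16190428-1bdc-11f1-aea4-d920f1c7e51e"

    def ceph_json(*args):
        # wrap a ceph CLI command in `cephadm shell`, as in the DEBUG line above
        cmd = ["sudo", CEPHADM, "--image", IMAGE, "shell", "--fsid", FSID,
               "--", "ceph", *args, "--format=json"]
        return json.loads(subprocess.check_output(cmd))

    # e.g. ceph_json("osd", "dump") repeats the osd dump queried above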
2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: cluster 2026-03-09T17:27:22.652234+0000 mgr.y (mgr.14505) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:22.886857+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.100:0/3413288172' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:22.886857+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.100:0/3413288172' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.262231+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.262231+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.272211+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.272211+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.405 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.970126+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.970126+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.976226+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.976226+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.977717+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.977717+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.978476+0000 mon.c (mon.2) 53 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.978476+0000 mon.c (mon.2) 53 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.983882+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:24 vm00 bash[28333]: audit 2026-03-09T17:27:23.983882+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: cluster 2026-03-09T17:27:22.652234+0000 mgr.y (mgr.14505) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: cluster 2026-03-09T17:27:22.652234+0000 mgr.y (mgr.14505) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:22.886857+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.100:0/3413288172' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:22.886857+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.100:0/3413288172' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.262231+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.262231+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.272211+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.272211+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.970126+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.970126+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.976226+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.976226+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.977717+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:24.406 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.977717+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.978476+0000 mon.c (mon.2) 53 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.978476+0000 mon.c (mon.2) 53 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.983882+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.406 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[20770]: audit 2026-03-09T17:27:23.983882+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:24.685 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 systemd[1]: Stopping Ceph alertmanager.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T17:27:24.685 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56160]: ts=2026-03-09T17:27:24.510Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T17:27:24.685 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56906]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-alertmanager-a 2026-03-09T17:27:24.685 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@alertmanager.a.service: Deactivated successfully. 2026-03-09T17:27:24.685 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 systemd[1]: Stopped Ceph alertmanager.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T17:27:24.685 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 systemd[1]: Started Ceph alertmanager.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.746Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.746Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.748Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.749Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.773Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.775Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.776Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T17:27:25.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:24 vm00 bash[56982]: ts=2026-03-09T17:27:24.776Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: cephadm 2026-03-09T17:27:23.996564+0000 mgr.y (mgr.14505) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: cephadm 2026-03-09T17:27:23.996564+0000 mgr.y (mgr.14505) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: cephadm 2026-03-09T17:27:23.999420+0000 mgr.y (mgr.14505) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: cephadm 2026-03-09T17:27:23.999420+0000 mgr.y (mgr.14505) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: audit 2026-03-09T17:27:24.613103+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: audit 2026-03-09T17:27:24.613103+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: audit 2026-03-09T17:27:24.621997+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.348 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:25 vm02 bash[23351]: audit 2026-03-09T17:27:24.621997+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 systemd[1]: Stopping Ceph prometheus.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 
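The stop/start of alertmanager.a above lines up with cephadm's "Reconfiguring alertmanager.a (dependencies changed)": on the host it is simply a restart of the per-daemon systemd unit ceph-<fsid>@alertmanager.a.service, after which Alertmanager 0.25.0 comes back up listening on :9093 with TLS disabled. A small sketch for checking such a unit's state on the host (a hypothetical helper, not part of teuthology):

    import subprocess

    FSID = "16190428-1bdc-11f1-aea4-d920f1c7e51e"  # fsid of this cluster, from the log

    def daemon_unit_active(daemon: str, fsid: str = FSID) -> bool:
        # cephadm-managed daemons run as ceph-<fsid>@<daemon>.service on their host
        unit = f"ceph-{fsid}@{daemon}.service"
        return subprocess.call(["systemctl", "is-active", "--quiet", unit]) == 0

    # daemon_unit_active("alertmanager.a") dips to False between the
    # "Stopped" and "Started" journal entries above, then returns True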
2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.321Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.323Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.323Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T17:27:25.349 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[49957]: ts=2026-03-09T17:27:25.323Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: cephadm 2026-03-09T17:27:23.996564+0000 mgr.y (mgr.14505) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: cephadm 2026-03-09T17:27:23.996564+0000 mgr.y (mgr.14505) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: cephadm 2026-03-09T17:27:23.999420+0000 mgr.y (mgr.14505) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: cephadm 2026-03-09T17:27:23.999420+0000 mgr.y (mgr.14505) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: audit 2026-03-09T17:27:24.613103+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: audit 2026-03-09T17:27:24.613103+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: audit 2026-03-09T17:27:24.621997+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:25 vm00 bash[28333]: audit 2026-03-09T17:27:24.621997+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: cephadm 2026-03-09T17:27:23.996564+0000 mgr.y (mgr.14505) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: cephadm 2026-03-09T17:27:23.996564+0000 mgr.y (mgr.14505) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: cephadm 2026-03-09T17:27:23.999420+0000 mgr.y (mgr.14505) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: cephadm 2026-03-09T17:27:23.999420+0000 mgr.y (mgr.14505) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: audit 2026-03-09T17:27:24.613103+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: audit 2026-03-09T17:27:24.613103+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: audit 2026-03-09T17:27:24.621997+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:25 vm00 bash[20770]: audit 2026-03-09T17:27:24.621997+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:25.635 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51791]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-prometheus-a 2026-03-09T17:27:25.635 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@prometheus.a.service: Deactivated successfully. 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 systemd[1]: Stopped Ceph prometheus.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 
2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 systemd[1]: Started Ceph prometheus.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.515Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.516Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.516Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm02 (none))" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.516Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.516Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.524Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.524Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.526Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.526Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.527Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.527Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.463µs 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.527Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.527Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.527Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.527Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=19.396µs wal_replay_duration=240.31µs wbl_replay_duration=130ns total_replay_duration=285.253µs 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.528Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.528Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.528Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.552Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=23.092361ms db_storage=891ns remote_storage=1.663µs web_handler=181ns query_engine=821ns scrape=7.711855ms scrape_sd=84.377µs notify=6.963µs notify_sd=54.471µs rules=14.957836ms tracing=5.49µs 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.552Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T17:27:25.636 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 17:27:25 vm02 bash[51866]: ts=2026-03-09T17:27:25.552Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-09T17:27:25.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STOPPING 2026-03-09T17:27:26.103 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STOPPED 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STARTING 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Serving on http://:::9283 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STARTED 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STOPPING 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STOPPED 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:25 vm00 bash[21037]: [09/Mar/2026:17:27:25] ENGINE Bus STARTING 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Serving on http://:::9283 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Bus STARTED 2026-03-09T17:27:26.104 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Bus STOPPING 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: cephadm 2026-03-09T17:27:24.625342+0000 mgr.y (mgr.14505) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: cephadm 2026-03-09T17:27:24.625342+0000 mgr.y (mgr.14505) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: cluster 2026-03-09T17:27:24.653560+0000 mgr.y (mgr.14505) 44 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: cluster 2026-03-09T17:27:24.653560+0000 mgr.y (mgr.14505) 44 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: cephadm 2026-03-09T17:27:24.815930+0000 mgr.y (mgr.14505) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm02 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: cephadm 2026-03-09T17:27:24.815930+0000 mgr.y (mgr.14505) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm02 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.409324+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.409324+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.415531+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.415531+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.421149+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.421149+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.422616+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.422616+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.426254+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.426254+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 
2026-03-09T17:27:25.437398+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.437398+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.438642+0000 mon.c (mon.2) 57 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.438642+0000 mon.c (mon.2) 57 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.442366+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.442366+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.453762+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.453762+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.455126+0000 mon.c (mon.2) 59 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.455126+0000 mon.c (mon.2) 59 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.459607+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.459607+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 2026-03-09T17:27:25.502047+0000 mon.c (mon.2) 60 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:26 vm02 bash[23351]: audit 
2026-03-09T17:27:25.502047+0000 mon.c (mon.2) 60 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: cephadm 2026-03-09T17:27:24.625342+0000 mgr.y (mgr.14505) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: cephadm 2026-03-09T17:27:24.625342+0000 mgr.y (mgr.14505) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: cluster 2026-03-09T17:27:24.653560+0000 mgr.y (mgr.14505) 44 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: cluster 2026-03-09T17:27:24.653560+0000 mgr.y (mgr.14505) 44 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: cephadm 2026-03-09T17:27:24.815930+0000 mgr.y (mgr.14505) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm02 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: cephadm 2026-03-09T17:27:24.815930+0000 mgr.y (mgr.14505) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm02 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.409324+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.409324+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.415531+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.415531+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.421149+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.421149+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.422616+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.422616+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.426254+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.426254+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.437398+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.437398+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.438642+0000 mon.c (mon.2) 57 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.438642+0000 mon.c (mon.2) 57 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.442366+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.442366+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.453762+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.453762+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.455126+0000 mon.c (mon.2) 59 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.455126+0000 mon.c (mon.2) 59 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.459607+0000 mon.a (mon.0) 804 : 
audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.459607+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.502047+0000 mon.c (mon.2) 60 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:26 vm00 bash[28333]: audit 2026-03-09T17:27:25.502047+0000 mon.c (mon.2) 60 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: cephadm 2026-03-09T17:27:24.625342+0000 mgr.y (mgr.14505) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: cephadm 2026-03-09T17:27:24.625342+0000 mgr.y (mgr.14505) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: cluster 2026-03-09T17:27:24.653560+0000 mgr.y (mgr.14505) 44 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: cluster 2026-03-09T17:27:24.653560+0000 mgr.y (mgr.14505) 44 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: cephadm 2026-03-09T17:27:24.815930+0000 mgr.y (mgr.14505) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm02 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: cephadm 2026-03-09T17:27:24.815930+0000 mgr.y (mgr.14505) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm02 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.409324+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.409324+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.415531+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.415531+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.421149+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.421149+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.422616+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.422616+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.426254+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.426254+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.437398+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.437398+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.438642+0000 mon.c (mon.2) 57 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.438642+0000 mon.c (mon.2) 57 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.442366+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.442366+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.453762+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.453762+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.455126+0000 mon.c (mon.2) 59 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.455126+0000 mon.c (mon.2) 59 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.459607+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.459607+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.502047+0000 mon.c (mon.2) 60 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:26.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[20770]: audit 2026-03-09T17:27:25.502047+0000 mon.c (mon.2) 60 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:27:27.037 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T17:27:27.038 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Bus STOPPED 2026-03-09T17:27:27.038 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Bus STARTING 2026-03-09T17:27:27.038 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Serving on http://:::9283 2026-03-09T17:27:27.038 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:26 vm00 bash[21037]: [09/Mar/2026:17:27:26] ENGINE Bus STARTED 2026-03-09T17:27:27.038 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:26 vm00 bash[56982]: ts=2026-03-09T17:27:26.750Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00106201s 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.421460+0000 mgr.y (mgr.14505) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.421460+0000 mgr.y (mgr.14505) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.422791+0000 mgr.y (mgr.14505) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.422791+0000 mgr.y (mgr.14505) 47 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.437818+0000 mgr.y (mgr.14505) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.437818+0000 mgr.y (mgr.14505) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.439034+0000 mgr.y (mgr.14505) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.439034+0000 mgr.y (mgr.14505) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.454400+0000 mgr.y (mgr.14505) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.454400+0000 mgr.y (mgr.14505) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.455640+0000 mgr.y (mgr.14505) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:25.455640+0000 mgr.y (mgr.14505) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:26.772600+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:27 vm02 bash[23351]: audit 2026-03-09T17:27:26.772600+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.421460+0000 mgr.y (mgr.14505) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.421460+0000 mgr.y (mgr.14505) 46 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.422791+0000 mgr.y (mgr.14505) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.422791+0000 mgr.y (mgr.14505) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.437818+0000 mgr.y (mgr.14505) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.437818+0000 mgr.y (mgr.14505) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.439034+0000 mgr.y (mgr.14505) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.439034+0000 mgr.y (mgr.14505) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.454400+0000 mgr.y (mgr.14505) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.454400+0000 mgr.y (mgr.14505) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.455640+0000 mgr.y (mgr.14505) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:25.455640+0000 mgr.y (mgr.14505) 51 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:26.772600+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:27 vm00 bash[28333]: audit 2026-03-09T17:27:26.772600+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.421460+0000 mgr.y (mgr.14505) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.421460+0000 mgr.y (mgr.14505) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.422791+0000 mgr.y (mgr.14505) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.422791+0000 mgr.y (mgr.14505) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.437818+0000 mgr.y (mgr.14505) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.437818+0000 mgr.y (mgr.14505) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.439034+0000 mgr.y (mgr.14505) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.439034+0000 mgr.y (mgr.14505) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.454400+0000 mgr.y (mgr.14505) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.454400+0000 mgr.y (mgr.14505) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.455640+0000 mgr.y (mgr.14505) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:25.455640+0000 mgr.y (mgr.14505) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:26.772600+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:27 vm00 bash[20770]: audit 2026-03-09T17:27:26.772600+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:28 vm02 bash[23351]: cluster 2026-03-09T17:27:26.654126+0000 mgr.y (mgr.14505) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:28 vm02 bash[23351]: cluster 2026-03-09T17:27:26.654126+0000 mgr.y (mgr.14505) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:28.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:28 vm00 bash[28333]: cluster 2026-03-09T17:27:26.654126+0000 mgr.y (mgr.14505) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:28 vm00 bash[28333]: cluster 2026-03-09T17:27:26.654126+0000 mgr.y (mgr.14505) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:28 vm00 bash[20770]: cluster 2026-03-09T17:27:26.654126+0000 mgr.y (mgr.14505) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:28 vm00 bash[20770]: cluster 2026-03-09T17:27:26.654126+0000 mgr.y (mgr.14505) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:28.922 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:29.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 -- 192.168.123.100:0/3597176827 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 msgr2=0x7f59c4115f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 --2- 192.168.123.100:0/3597176827 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 0x7f59c4115f70 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f59c0009f90 tx=0x7f59c002f390 comp rx=0 tx=0).stop 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 -- 192.168.123.100:0/3597176827 shutdown_connections 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 --2- 192.168.123.100:0/3597176827 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 0x7f59c4115f70 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 --2- 192.168.123.100:0/3597176827 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077f40 0x7f59c4113640 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 --2- 192.168.123.100:0/3597176827 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077620 0x7f59c4077a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 -- 192.168.123.100:0/3597176827 >> 192.168.123.100:0/3597176827 conn(0x7f59c41009e0 msgr2=0x7f59c4102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 -- 192.168.123.100:0/3597176827 shutdown_connections 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 -- 192.168.123.100:0/3597176827 wait complete. 
2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.063+0000 7f59cb93f640 1 Processor -- start 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- start start 2026-03-09T17:27:29.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 0x7f59c41a0fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077f40 0x7f59c41a14f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 0x7f59c41a5880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f59c41189c0 con 0x7f59c4077f40 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f59c4118840 con 0x7f59c4077620 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f59c4118b40 con 0x7f59c4113b80 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c96b4640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 0x7f59c41a0fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c96b4640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 0x7f59c41a0fb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:36968/0 (socket says 192.168.123.100:36968) 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c96b4640 1 -- 192.168.123.100:0/2819645441 learned_addr learned my addr 192.168.123.100:0/2819645441 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077f40 0x7f59c41a14f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c9eb5640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 0x7f59c41a5880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:27:29.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 -- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 msgr2=0x7f59c41a5880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 0x7f59c41a5880 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 -- 192.168.123.100:0/2819645441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 msgr2=0x7f59c41a0fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 0x7f59c41a0fb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 -- 192.168.123.100:0/2819645441 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f59c41a6000 con 0x7f59c4077f40 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c96b4640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 0x7f59c41a0fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59c8eb3640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077f40 0x7f59c41a14f0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f59b800e9f0 tx=0x7f59b800eec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f59b800cd80 con 0x7f59c4077f40 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f59b8004540 con 0x7f59c4077f40 2026-03-09T17:27:29.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f59c41a62f0 con 0x7f59c4077f40 2026-03-09T17:27:29.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f59b8010690 con 0x7f59c4077f40 2026-03-09T17:27:29.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f59c41adb30 con 0x7f59c4077f40 2026-03-09T17:27:29.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.067+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f598c005180 con 0x7f59c4077f40 2026-03-09T17:27:29.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.071+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f59b80040d0 con 0x7f59c4077f40 2026-03-09T17:27:29.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.071+0000 7f59b27fc640 1 --2- 192.168.123.100:0/2819645441 >> v2:192.168.123.100:6800/2673235927 conn(0x7f599c077680 0x7f599c079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:29.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.071+0000 7f59c96b4640 1 --2- 192.168.123.100:0/2819645441 >> v2:192.168.123.100:6800/2673235927 conn(0x7f599c077680 0x7f599c079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:29.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.071+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f59b805e0e0 con 0x7f59c4077f40 2026-03-09T17:27:29.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.071+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f59b8066040 con 0x7f59c4077f40 
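The messenger traffic above belongs to a short-lived ceph CLI client that has just finished its mon handshake; the next entries show it sending a mon_command {"prefix": "osd dump", "format": "json"} and receiving the JSON reply printed on stdout. As an aside, the same map can be requested directly through the python-rados bindings rather than the CLI; the sketch below is illustrative only (it is not part of this run), and the conffile/keyring paths are assumptions, not values taken from this job.

    import json
    import rados

    # Connect as client.admin using a local ceph.conf (paths are illustrative).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
    cluster.connect()

    # Same request the CLI issues below: {"prefix": "osd dump", "format": "json"}.
    cmd = json.dumps({"prefix": "osd dump", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    if ret != 0:
        raise RuntimeError(f"osd dump failed: {outs}")

    osdmap = json.loads(outbuf)
    print(osdmap["epoch"], osdmap["fsid"])

    cluster.shutdown()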
2026-03-09T17:27:29.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.071+0000 7f59c96b4640 1 --2- 192.168.123.100:0/2819645441 >> v2:192.168.123.100:6800/2673235927 conn(0x7f599c077680 0x7f599c079b40 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f59b40097c0 tx=0x7f59b4009340 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:29.175 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.171+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f598c005740 con 0x7f59c4077f40 2026-03-09T17:27:29.176 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59b27fc640 1 -- 192.168.123.100:0/2819645441 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v65) ==== 74+0+21299 (secure 0 0 0) 0x7f59b806aef0 con 0x7f59c4077f40 2026-03-09T17:27:29.176 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:27:29.176 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":65,"fsid":"16190428-1bdc-11f1-aea4-d920f1c7e51e","created":"2026-03-09T17:20:18.987272+0000","modified":"2026-03-09T17:26:56.573625+0000","last_up_change":"2026-03-09T17:26:06.190800+0000","last_in_change":"2026-03-09T17:25:49.789972+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"luminous","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T17:23:15.373179+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-09T17:26:26.069452+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-09T17:26:28.166724+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-09T17:26:29.051528+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":61,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T17:26:30.129678+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"59","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T17:26:32.208119+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"568cb8ad-2652-448a-8223-a18b7a893c0f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6801","nonce":1564530650}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":1564530650}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":1564530650}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":1564530650}]},"public_addr":"192.168.123.100:6801/1564530650","cluster_addr":"192.168.123.100:6802/1564530650","heartbeat_back_addr":"192.168.123.100:6804/1564530650","heartbeat_front_addr":"192.168.123.100:6803/1564530650","state":["exists","up"]},{"osd":1,"uuid":"e9e37873-3fd7-4a71-be36-c91d099132ac","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6805","nonce":1086087815}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":1086087815}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":1086087815}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":1086087815}]},"public_addr":"192.168.123.100:6805/1086087815","cluster_addr":"192.168.123.100:6806/1086087815","heartbeat_back_addr":"192.168.123.100:6808/1086087815","heartbeat_front_addr":"192.168.123.100:6807/1086087815","state":["exists","up"]},{"osd":2,"uuid":"7306de3d-a962-4f03-99cd-7f218259f7e5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":62,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6809","nonce":4038313383}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":4038313383}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":4038313383}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":4038313383}]},"public_addr":"192.168.123.100:6809/4038313383","cluster_addr":"192.168.123.100:6810/4038313383","heartbeat_back_addr":"192.168.123.100:6812/4038313383","heartbeat_front_addr":"192.168.123.100:6811/4038313383","state":["exists","up"]},{"osd":3,"uuid":"814401b5-fa87-447f-8581-6e4a7fde7f2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6813","nonce":652999983}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":652999983}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":652999983}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":652999983}]},"public_addr":"192.168.123.100:6813/652999983","cluster_addr":"192.168.123.100:6814/652999983","heartbeat_back_addr":"192.168.123.100:6816/652999983","heartbeat_front_addr":"192.168.123.100:6815/652999983","state":["exists","up"]},{"osd":4,"uuid":"dae36d57-3a29-4f68-ae98-8c24557509f1","up":1,"in":1,"weight":1,"primary
_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":1924015120}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6801","nonce":1924015120}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6803","nonce":1924015120}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":1924015120}]},"public_addr":"192.168.123.102:6800/1924015120","cluster_addr":"192.168.123.102:6801/1924015120","heartbeat_back_addr":"192.168.123.102:6803/1924015120","heartbeat_front_addr":"192.168.123.102:6802/1924015120","state":["exists","up"]},{"osd":5,"uuid":"af8538f6-82da-4394-a355-4fac49048640","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2433922459}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6805","nonce":2433922459}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6807","nonce":2433922459}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2433922459}]},"public_addr":"192.168.123.102:6804/2433922459","cluster_addr":"192.168.123.102:6805/2433922459","heartbeat_back_addr":"192.168.123.102:6807/2433922459","heartbeat_front_addr":"192.168.123.102:6806/2433922459","state":["exists","up"]},{"osd":6,"uuid":"8c0ef3ce-fc46-4d75-a3ec-120ccf82b9a6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":57,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2053868073}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6809","nonce":2053868073}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6811","nonce":2053868073}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2053868073}]},"public_addr":"192.168.123.102:6808/2053868073","cluster_addr":"192.168.123.102:6809/2053868073","heartbeat_back_addr":"192.168.123.102:6811/2053868073","heartbeat_front_addr":"192.168.123.102:6810/2053868073","state":["exists","up"]},{"osd":7,"uuid":"5e273c6f-f4ee-411b-abb9-572d1556cdc9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":50,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":3460410068}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6813","nonce":3460410068}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6815","nonce":3460410068}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":3460410068}]},"public_addr":"192.168.123.102:6812/3460410068","cluster_addr":"192.168.123.102:6813/3460410068","heartbeat_back_addr":"192.168.123.102:6815/3460410068","heartbeat_front_addr":"192.168.123.102:6814/3460410068","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:22:05.721108+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:22:38.798160+0000","dead
_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:23:11.822442+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:23:47.183439+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:24:21.490435+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:24:55.760395+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:25:31.248063+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:26:04.171875+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[{"pgid":"2.1d","mappings":[{"from":7,"to":2}]}],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6800/2312331294":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/2951980884":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/4291408295":"2026-03-10T17:20:31.039632+0000","192.168.123.100:0/829810942":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/2563954711":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/327913294":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/2883785981":"2026-03-10T17:20:41.305093+0000","192.168.123.100:6800/1655006894":"2026-03-10T17:20:41.305093+0000","192.168.123.100:0/648307942":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/1071752193":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/69665273":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/408944714":"2026-03-10T17:20:31.039632+0000","192.168.123.100:6800/3114914985":"2026-03-10T17:26:56.573600+0000","192.168.123.100:0/1156117939":"2026-03-10T17:26:56.573600+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T17:27:29.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 >> v2:192.168.123.100:6800/2673235927 conn(0x7f599c077680 msgr2=0x7f599c079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:29.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 --2- 192.168.123.100:0/2819645441 >> v2:192.168.123.100:6800/2673235927 conn(0x7f599c077680 0x7f599c079b40 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f59b40097c0 tx=0x7f59b4009340 comp rx=0 tx=0).stop 2026-03-09T17:27:29.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077f40 msgr2=0x7f59c41a14f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077f40 0x7f59c41a14f0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f59b800e9f0 tx=0x7f59b800eec0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 shutdown_connections 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 --2- 192.168.123.100:0/2819645441 >> v2:192.168.123.100:6800/2673235927 conn(0x7f599c077680 0x7f599c079b40 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f59c4113b80 0x7f59c41a5880 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f59c4077f40 0x7f59c41a14f0 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 --2- 192.168.123.100:0/2819645441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f59c4077620 0x7f59c41a0fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 >> 192.168.123.100:0/2819645441 conn(0x7f59c41009e0 msgr2=0x7f59c4102dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.175+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 shutdown_connections 2026-03-09T17:27:29.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:29.179+0000 7f59cb93f640 1 -- 192.168.123.100:0/2819645441 wait complete. 
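The client that requested the osd dump has now torn down its connections ("wait complete"). The JSON blob it printed (epoch 65, fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e) can be post-processed with nothing beyond the standard library; a minimal sketch follows, assuming the dump above has been saved to a local file named osd_dump.json (an assumed filename, not produced by this run). The keys used (epoch, fsid, pools, pool_name, osds, up, in, max_osd) all appear in the output above.

    import json

    # Assumed: the "osd dump --format json" output above saved locally.
    with open("osd_dump.json") as f:
        dump = json.load(f)

    pools = {p["pool"]: p["pool_name"] for p in dump["pools"]}
    up_in = [o["osd"] for o in dump["osds"] if o["up"] == 1 and o["in"] == 1]

    print(f"epoch {dump['epoch']}, fsid {dump['fsid']}")
    print(f"pools: {pools}")
    print(f"OSDs up+in: {len(up_in)} of {dump['max_osd']}")  # expect 8 of 8 here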
2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.0 flush_pg_stats 2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.1 flush_pg_stats 2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.2 flush_pg_stats 2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.3 flush_pg_stats 2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.4 flush_pg_stats 2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.5 flush_pg_stats 2026-03-09T17:27:29.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.6 flush_pg_stats 2026-03-09T17:27:29.232 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph tell osd.7 flush_pg_stats 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:30 vm00 bash[20770]: cluster 2026-03-09T17:27:28.654708+0000 mgr.y (mgr.14505) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:30 vm00 bash[20770]: cluster 2026-03-09T17:27:28.654708+0000 mgr.y (mgr.14505) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:30 vm00 bash[20770]: audit 2026-03-09T17:27:29.175947+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.100:0/2819645441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:30 vm00 bash[20770]: audit 2026-03-09T17:27:29.175947+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 
192.168.123.100:0/2819645441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:30 vm00 bash[28333]: cluster 2026-03-09T17:27:28.654708+0000 mgr.y (mgr.14505) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:30 vm00 bash[28333]: cluster 2026-03-09T17:27:28.654708+0000 mgr.y (mgr.14505) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:30 vm00 bash[28333]: audit 2026-03-09T17:27:29.175947+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.100:0/2819645441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:30 vm00 bash[28333]: audit 2026-03-09T17:27:29.175947+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.100:0/2819645441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:30 vm02 bash[23351]: cluster 2026-03-09T17:27:28.654708+0000 mgr.y (mgr.14505) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:30 vm02 bash[23351]: cluster 2026-03-09T17:27:28.654708+0000 mgr.y (mgr.14505) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:30 vm02 bash[23351]: audit 2026-03-09T17:27:29.175947+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.100:0/2819645441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:30 vm02 bash[23351]: audit 2026-03-09T17:27:29.175947+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 
192.168.123.100:0/2819645441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.304496+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.304496+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.310182+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.310182+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.836676+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.836676+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.843170+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.843170+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.844330+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.844330+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.845175+0000 mon.c (mon.2) 63 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.845175+0000 mon.c (mon.2) 63 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.848984+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:31 vm02 bash[23351]: audit 2026-03-09T17:27:30.848984+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.304496+0000 
mon.a (mon.0) 806 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.304496+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.310182+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.310182+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.836676+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.836676+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.843170+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.843170+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.844330+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.844330+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.845175+0000 mon.c (mon.2) 63 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.845175+0000 mon.c (mon.2) 63 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.848984+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:31 vm00 bash[28333]: audit 2026-03-09T17:27:30.848984+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.304496+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.304496+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 
2026-03-09T17:27:30.310182+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.310182+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.836676+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.836676+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.843170+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.843170+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.844330+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.844330+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.845175+0000 mon.c (mon.2) 63 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.845175+0000 mon.c (mon.2) 63 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.848984+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:31 vm00 bash[20770]: audit 2026-03-09T17:27:30.848984+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:27:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:32 vm02 bash[23351]: cluster 2026-03-09T17:27:30.655056+0000 mgr.y (mgr.14505) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:32 vm02 bash[23351]: cluster 2026-03-09T17:27:30.655056+0000 mgr.y (mgr.14505) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:32 vm00 bash[28333]: cluster 2026-03-09T17:27:30.655056+0000 mgr.y (mgr.14505) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:32.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:32 vm00 bash[28333]: cluster 2026-03-09T17:27:30.655056+0000 mgr.y (mgr.14505) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:32 vm00 bash[20770]: cluster 2026-03-09T17:27:30.655056+0000 mgr.y (mgr.14505) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:32 vm00 bash[20770]: cluster 2026-03-09T17:27:30.655056+0000 mgr.y (mgr.14505) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:33 vm02 bash[23351]: audit 2026-03-09T17:27:31.465543+0000 mgr.y (mgr.14505) 55 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:33.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:33 vm02 bash[23351]: audit 2026-03-09T17:27:31.465543+0000 mgr.y (mgr.14505) 55 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:33.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:33 vm00 bash[28333]: audit 2026-03-09T17:27:31.465543+0000 mgr.y (mgr.14505) 55 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:33.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:33 vm00 bash[28333]: audit 2026-03-09T17:27:31.465543+0000 mgr.y (mgr.14505) 55 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:33 vm00 bash[20770]: audit 2026-03-09T17:27:31.465543+0000 mgr.y (mgr.14505) 55 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:33 vm00 bash[20770]: audit 2026-03-09T17:27:31.465543+0000 mgr.y (mgr.14505) 55 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:33.973 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.975 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.975 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.976 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.978 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.979 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.981 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:33.982 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:34.251 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 -- 192.168.123.100:0/2698926631 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f279010b080 msgr2=0x7f2790074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.251 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 --2- 192.168.123.100:0/2698926631 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f279010b080 0x7f2790074d30 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7f278800b600 tx=0x7f27880305c0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 -- 192.168.123.100:0/2698926631 shutdown_connections 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 --2- 192.168.123.100:0/2698926631 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2790075470 0x7f279007be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 --2- 192.168.123.100:0/2698926631 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f279010b080 0x7f2790074d30 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 --2- 192.168.123.100:0/2698926631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f279010a6d0 0x7f279010aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 -- 192.168.123.100:0/2698926631 >> 192.168.123.100:0/2698926631 conn(0x7f279006d9f0 msgr2=0x7f279006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.247+0000 7f279507d640 1 -- 192.168.123.100:0/2698926631 shutdown_connections 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- 192.168.123.100:0/2698926631 wait complete. 
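The eight DEBUG lines at 17:27:29.231 show the harness flushing PG stats on every OSD through cephadm shell, and the repeated "Inferring config .../mon.c/config" lines plus the messenger lifecycles that follow are those eight invocations running. A hedged sketch of an equivalent loop is shown below; the image, fsid, and cephadm path are copied from those DEBUG commands, but this subprocess loop is illustrative and is not the harness's own implementation.

    import subprocess

    # Image, fsid, and cephadm path copied from the DEBUG commands above;
    # passwordless sudo on the test node is assumed.
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "16190428-1bdc-11f1-aea4-d920f1c7e51e"

    for osd_id in range(8):
        subprocess.run(
            ["sudo", "/home/ubuntu/cephtest/cephadm",
             "--image", IMAGE,
             "shell", "--fsid", FSID, "--",
             "ceph", "tell", f"osd.{osd_id}", "flush_pg_stats"],
            check=True,
        )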
2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 Processor -- start 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- start start 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2790075470 0x7f2790085a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f279010a6d0 0x7f2790085fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f279007fb10 0x7f279007ffa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f279007e450 con 0x7f2790075470 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f279007e2d0 con 0x7f279010a6d0 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f279007e5d0 con 0x7f279007fb10 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f279007fb10 0x7f279007ffa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f279007fb10 0x7f279007ffa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:58872/0 (socket says 192.168.123.100:58872) 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 -- 192.168.123.100:0/185327651 learned_addr learned my addr 192.168.123.100:0/185327651 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278ed76640 1 --2- 192.168.123.100:0/185327651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2790075470 0x7f2790085a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.253 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278e575640 1 --2- 192.168.123.100:0/185327651 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f279010a6d0 0x7f2790085fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 -- 192.168.123.100:0/185327651 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f279010a6d0 msgr2=0x7f2790085fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 --2- 192.168.123.100:0/185327651 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f279010a6d0 0x7f2790085fd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 -- 192.168.123.100:0/185327651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2790075470 msgr2=0x7f2790085a90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 --2- 192.168.123.100:0/185327651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2790075470 0x7f2790085a90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 -- 192.168.123.100:0/185327651 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2790080860 con 0x7f279007fb10 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278f577640 1 --2- 192.168.123.100:0/185327651 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f279007fb10 0x7f279007ffa0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f278400e9b0 tx=0x7f278400ee80 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f278400cde0 con 0x7f279007fb10 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- 192.168.123.100:0/185327651 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2790137d60 con 0x7f279007fb10 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- 192.168.123.100:0/185327651 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f27901382c0 con 0x7f279007fb10 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2784004540 con 0x7f279007fb10 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2784010690 con 0x7f279007fb10 2026-03-09T17:27:34.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f279507d640 1 -- 192.168.123.100:0/185327651 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f279010e770 con 0x7f279007fb10 2026-03-09T17:27:34.255 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f2784020070 con 0x7f279007fb10 2026-03-09T17:27:34.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f276ffff640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.100:6800/2673235927 conn(0x7f27740778d0 0x7f2774079d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.251+0000 7f278ed76640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.100:6800/2673235927 conn(0x7f27740778d0 0x7f2774079d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f278ed76640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.100:6800/2673235927 conn(0x7f27740778d0 0x7f2774079d90 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f2780007c10 tx=0x7f27800073d0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f278409a100 con 0x7f279007fb10 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f276ffff640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.102:6800/1924015120 conn(0x7f27740811f0 0x7f2774083650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 --> v2:192.168.123.102:6800/1924015120 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f2774083ce0 con 0x7f27740811f0 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7f278409a4f0 con 0x7f279007fb10 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f278e575640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.102:6800/1924015120 conn(0x7f27740811f0 0x7f2774083650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f278e575640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.102:6800/1924015120 conn(0x7f27740811f0 0x7f2774083650 crc :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.259 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.255+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== osd.4 v2:192.168.123.102:6800/1924015120 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f2774083ce0 con 0x7f27740811f0 2026-03-09T17:27:34.267 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.263+0000 7f279507d640 1 -- 192.168.123.100:0/185327651 --> 
v2:192.168.123.102:6800/1924015120 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f279010e980 con 0x7f27740811f0 2026-03-09T17:27:34.268 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.263+0000 7f276ffff640 1 -- 192.168.123.100:0/185327651 <== osd.4 v2:192.168.123.102:6800/1924015120 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f279010e980 con 0x7f27740811f0 2026-03-09T17:27:34.268 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 >> v2:192.168.123.102:6800/1924015120 conn(0x7f27740811f0 msgr2=0x7f2774083650 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.268 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.102:6800/1924015120 conn(0x7f27740811f0 0x7f2774083650 crc :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.268 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 >> v2:192.168.123.100:6800/2673235927 conn(0x7f27740778d0 msgr2=0x7f2774079d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.268 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 --2- 192.168.123.100:0/185327651 >> v2:192.168.123.100:6800/2673235927 conn(0x7f27740778d0 0x7f2774079d90 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f2780007c10 tx=0x7f27800073d0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.271 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f279007fb10 msgr2=0x7f279007ffa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.271 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 --2- 192.168.123.100:0/185327651 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f279007fb10 0x7f279007ffa0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f278400e9b0 tx=0x7f278400ee80 comp rx=0 tx=0).stop 2026-03-09T17:27:34.271 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f278f577640 1 -- 192.168.123.100:0/185327651 reap_dead start 2026-03-09T17:27:34.271 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 shutdown_connections 2026-03-09T17:27:34.271 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 >> 192.168.123.100:0/185327651 conn(0x7f279006d9f0 msgr2=0x7f27900732a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.272 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 shutdown_connections 2026-03-09T17:27:34.272 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.267+0000 7f276dffb640 1 -- 192.168.123.100:0/185327651 wait complete. 
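[editorial note] The trace above is one short-lived CLI client's full lifetime: it races msgr2 connections to all three monitor endpoints, keeps the first one to reach READY (mon.2 here) and marks the others down, pulls monmap/config/mgrmap/osdmap, asks osd.4 for its command table (get_command_descriptions), issues flush_pg_stats, and then shuts all connections down. The "get_command_descriptions followed by the real command" exchange is what the ceph CLI's tell path performs, so the same round-trip can be reproduced outside the workunit with "ceph tell osd.4 flush_pg_stats". A minimal sketch, assuming a reachable cluster and an admin keyring on the host; the helper name is illustrative, not part of this run:

#!/usr/bin/env python3
# Editorial sketch, not part of this run: reproduce the osd.4 round-trip seen
# in the trace above via the ceph CLI, whose "tell" path performs the same
# get_command_descriptions + command exchange.
# Assumes a reachable cluster and a client.admin keyring on this host.
import subprocess

def flush_pg_stats(osd_id):
    # Equivalent to running: ceph tell osd.<osd_id> flush_pg_stats
    return subprocess.check_output(
        ["ceph", "tell", "osd.%d" % osd_id, "flush_pg_stats"],
        text=True,
    )

if __name__ == "__main__":
    print(flush_pg_stats(4))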
2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.339+0000 7fbd19f08640 1 -- 192.168.123.100:0/1237930836 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbd14075470 msgr2=0x7fbd1407be20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.339+0000 7fbd19f08640 1 --2- 192.168.123.100:0/1237930836 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbd14075470 0x7fbd1407be20 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fbd0c00d500 tx=0x7fbd0c030f80 comp rx=0 tx=0).stop 2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- 192.168.123.100:0/1237930836 shutdown_connections 2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 --2- 192.168.123.100:0/1237930836 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbd14075470 0x7fbd1407be20 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 --2- 192.168.123.100:0/1237930836 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd1410b080 0x7fbd14074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 --2- 192.168.123.100:0/1237930836 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410a6d0 0x7fbd1410aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- 192.168.123.100:0/1237930836 >> 192.168.123.100:0/1237930836 conn(0x7fbd1406d9f0 msgr2=0x7fbd1406de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- 192.168.123.100:0/1237930836 shutdown_connections 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- 192.168.123.100:0/1237930836 wait complete. 
2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 Processor -- start 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- start start 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd1410a6d0 0x7fbd14085aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410b080 0x7fbd14085fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbd1407fc00 0x7fbd140800b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbd1407e7d0 con 0x7fbd1410b080 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fbd1407e650 con 0x7fbd1410a6d0 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd19f08640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fbd1407e950 con 0x7fbd1407fc00 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410b080 0x7fbd14085fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410b080 0x7fbd14085fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:35842/0 (socket says 192.168.123.100:35842) 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 -- 192.168.123.100:0/3034223093 learned_addr learned my addr 192.168.123.100:0/3034223093 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd137fe640 1 --2- 192.168.123.100:0/3034223093 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd1410a6d0 0x7fbd14085aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 -- 192.168.123.100:0/3034223093 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbd1407fc00 msgr2=0x7fbd140800b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:34.352 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 --2- 192.168.123.100:0/3034223093 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbd1407fc00 0x7fbd140800b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 -- 192.168.123.100:0/3034223093 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd1410a6d0 msgr2=0x7fbd14085aa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 --2- 192.168.123.100:0/3034223093 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd1410a6d0 0x7fbd14085aa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 -- 192.168.123.100:0/3034223093 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbd141be950 con 0x7fbd1410b080 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.347+0000 7fbd12ffd640 1 --2- 192.168.123.100:0/3034223093 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410b080 0x7fbd14085fe0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fbd0800ed30 tx=0x7fbd0800c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.352 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbd080040d0 con 0x7fbd1410b080 2026-03-09T17:27:34.353 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd19f08640 1 -- 192.168.123.100:0/3034223093 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbd141bec40 con 0x7fbd1410b080 2026-03-09T17:27:34.353 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd19f08640 1 -- 192.168.123.100:0/3034223093 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbd141bf180 con 0x7fbd1410b080 2026-03-09T17:27:34.353 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fbd080026e0 con 0x7fbd1410b080 2026-03-09T17:27:34.353 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbd08010650 con 0x7fbd1410b080 2026-03-09T17:27:34.354 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fbd08020070 con 0x7fbd1410b080 2026-03-09T17:27:34.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd10ff9640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbcec077790 0x7fbcec079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.359 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd137fe640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbcec077790 0x7fbcec079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.351+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fbd0809ae00 con 0x7fbd1410b080 2026-03-09T17:27:34.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.355+0000 7fbcf27fc640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.102:6808/2053868073 conn(0x7fbd140639d0 0x7fbd14075d40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.355+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 --> v2:192.168.123.102:6808/2053868073 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fbd14080970 con 0x7fbd140639d0 2026-03-09T17:27:34.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.355+0000 7fbd13fff640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.102:6808/2053868073 conn(0x7fbd140639d0 0x7fbd14075d40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.355+0000 7fbd137fe640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbcec077790 0x7fbcec079c50 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7fbd04004560 tx=0x7fbd04009290 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.363 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.359+0000 7fbd13fff640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.102:6808/2053868073 conn(0x7fbd140639d0 0x7fbd14075d40 crc :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.6 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.371 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.363+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== osd.6 v2:192.168.123.102:6808/2053868073 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7fbd0c007be0 con 0x7fbd140639d0 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.399+0000 7f12f3fff640 1 -- 192.168.123.100:0/2084970540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f411c780 msgr2=0x7f12f411eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.399+0000 7f12f3fff640 1 --2- 192.168.123.100:0/2084970540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f411c780 0x7f12f411eb70 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f12ec009fc0 tx=0x7f12ec02f3d0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- 192.168.123.100:0/2084970540 shutdown_connections 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 --2- 192.168.123.100:0/2084970540 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f411c780 0x7f12f411eb70 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 --2- 192.168.123.100:0/2084970540 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f410a850 0x7f12f410acb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 --2- 192.168.123.100:0/2084970540 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f12f410a470 0x7f12f41114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- 192.168.123.100:0/2084970540 >> 192.168.123.100:0/2084970540 conn(0x7f12f406d9c0 msgr2=0x7f12f406ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- 192.168.123.100:0/2084970540 shutdown_connections 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/3847938377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a850 msgr2=0x7f6ab410acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- 192.168.123.100:0/3847938377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a850 0x7f6ab410acd0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f6a9c009960 tx=0x7f6a9c02f120 comp rx=0 tx=0).stop 2026-03-09T17:27:34.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/3847938377 shutdown_connections 2026-03-09T17:27:34.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- 192.168.123.100:0/3847938377 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ab411c780 0x7f6ab411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- 192.168.123.100:0/3847938377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a850 0x7f6ab410acd0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- 192.168.123.100:0/3847938377 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ab410a470 0x7f6ab41114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/3847938377 >> 192.168.123.100:0/3847938377 conn(0x7f6ab406db00 msgr2=0x7f6ab406df10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- 192.168.123.100:0/2084970540 wait complete. 
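[editorial note] From here on the stderr capture interleaves several concurrent short-lived clients; each one can be told apart by the nonce in its messenger address (e.g. 192.168.123.100:0/2084970540 versus .../3847938377 above). A rough, hypothetical helper for splitting such a capture by client, written against the address format visible in these entries; names and regex are assumptions, not tooling from this run:

#!/usr/bin/env python3
# Editorial sketch: split an interleaved stderr capture like the one above by
# client, using the nonce in each client messenger address ("<ip>:0/<nonce>").
import re
import sys
from collections import defaultdict

ADDR_RE = re.compile(r"(\d+\.\d+\.\d+\.\d+):0/(\d+)")

def group_by_client(lines):
    """Return {(ip, nonce): [trace lines]} for each short-lived client."""
    clients = defaultdict(list)
    for line in lines:
        m = ADDR_RE.search(line)
        if m:
            clients[m.groups()].append(line.rstrip("\n"))
    return clients

if __name__ == "__main__":
    for (ip, nonce), entries in sorted(group_by_client(sys.stdin).items()):
        print("%s:0/%s  %d lines" % (ip, nonce, len(entries)))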
2026-03-09T17:27:34.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 Processor -- start 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/3847938377 shutdown_connections 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/3847938377 wait complete. 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 Processor -- start 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- start start 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a470 0x7f6ab41af6e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ab410a850 0x7f6ab41afc20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ab411c780 0x7f6ab41a98a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6ab4121530 con 0x7f6ab411c780 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f6ab41213b0 con 0x7f6ab410a470 2026-03-09T17:27:34.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f6ab9c1d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f6ab41216b0 con 0x7f6ab410a850 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- start start 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f410a470 0x7f12f4112660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f410a850 0x7f12f4112ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f12f411c780 0x7f12f41be1f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f12f4121400 con 0x7f12f410a850 2026-03-09T17:27:34.416 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f12f4121280 con 0x7f12f410a470 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.411+0000 7f12f3fff640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f12f4121580 con 0x7f12f411c780 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f2ffd640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f410a470 0x7f12f4112660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f2ffd640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f410a470 0x7f12f4112660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:58266/0 (socket says 192.168.123.100:58266) 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f2ffd640 1 -- 192.168.123.100:0/809679606 learned_addr learned my addr 192.168.123.100:0/809679606 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f27fc640 1 --2- 192.168.123.100:0/809679606 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f410a850 0x7f12f4112ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 --2- 192.168.123.100:0/809679606 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f12f411c780 0x7f12f41be1f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 -- 192.168.123.100:0/809679606 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f410a470 msgr2=0x7f12f4112660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 --2- 192.168.123.100:0/809679606 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f410a470 0x7f12f4112660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 -- 192.168.123.100:0/809679606 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f410a850 msgr2=0x7f12f4112ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 --2- 192.168.123.100:0/809679606 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f12f410a850 0x7f12f4112ba0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.416 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 -- 
192.168.123.100:0/809679606 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f12f41be730 con 0x7f12f411c780 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a470 0x7f6ab41af6e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a470 0x7f6ab41af6e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:58258/0 (socket says 192.168.123.100:58258) 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 -- 192.168.123.100:0/1874543358 learned_addr learned my addr 192.168.123.100:0/1874543358 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab3fff640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ab411c780 0x7f6ab41a98a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 -- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ab410a850 msgr2=0x7f6ab41afc20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab2ffd640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ab410a850 0x7f6ab41afc20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ab410a850 0x7f6ab41afc20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 -- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ab411c780 msgr2=0x7f6ab41a98a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ab411c780 0x7f6ab41a98a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 -- 192.168.123.100:0/1874543358 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6ab41aa070 con 0x7f6ab410a470 2026-03-09T17:27:34.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f37fe640 1 --2- 
192.168.123.100:0/809679606 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f12f411c780 0x7f12f41be1f0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f12f406bff0 tx=0x7f12ec002ee0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab3fff640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ab411c780 0x7f6ab41a98a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T17:27:34.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab2ffd640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ab410a850 0x7f6ab41afc20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T17:27:34.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab37fe640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a470 0x7f6ab41af6e0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f6aa800d950 tx=0x7f6aa800de20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6aa8014070 con 0x7f6ab410a470 2026-03-09T17:27:34.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f12ec004300 con 0x7f12f411c780 2026-03-09T17:27:34.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f12f41be900 con 0x7f12f411c780 2026-03-09T17:27:34.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f12f41bede0 con 0x7f12f411c780 2026-03-09T17:27:34.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f12ec033070 con 0x7f12f411c780 2026-03-09T17:27:34.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.419+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f12ec038840 con 0x7f12f411c780 2026-03-09T17:27:34.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.419+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f12f410be20 con 0x7f12f411c780 2026-03-09T17:27:34.424 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/1874543358 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6ab41aa360 con 0x7f6ab410a470 2026-03-09T17:27:34.424 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.415+0000 7f6ab9c1d640 1 -- 192.168.123.100:0/1874543358 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6ab4111f30 con 0x7f6ab410a470 2026-03-09T17:27:34.424 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.423+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f6aa80044e0 con 0x7f6ab410a470 2026-03-09T17:27:34.425 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.423+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6aa8002e30 con 0x7f6ab410a470 2026-03-09T17:27:34.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.423+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f6aa800b840 con 0x7f6ab410a470 2026-03-09T17:27:34.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.423+0000 7f6ab0ff9640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6a8c0776c0 0x7f6a8c079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.423+0000 7f6ab2ffd640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6a8c0776c0 0x7f6a8c079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f6aa8099ff0 con 0x7f6ab410a470 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f6a80000f80 con 0x7f6ab410a470 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6ab2ffd640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6a8c0776c0 0x7f6a8c079b80 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f6ab410c4a0 tx=0x7f6a9c0023d0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6ab0ff9640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6805/1086087815 conn(0x7f6a8c080fe0 0x7f6a8c083440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 --> v2:192.168.123.100:6805/1086087815 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f6aa8020e50 con 0x7f6a8c080fe0 2026-03-09T17:27:34.430 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7f6aa80627f0 con 0x7f6ab410a470 2026-03-09T17:27:34.430 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.427+0000 7f6ab3fff640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6805/1086087815 conn(0x7f6a8c080fe0 0x7f6a8c083440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.432 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.431+0000 7f6ab3fff640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6805/1086087815 conn(0x7f6a8c080fe0 0x7f6a8c083440 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.435 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.431+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== osd.1 v2:192.168.123.100:6805/1086087815 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f6aa8020e50 con 0x7f6a8c080fe0 2026-03-09T17:27:34.435 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.435+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 --> v2:192.168.123.102:6808/2053868073 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fbd14064000 con 0x7fbd140639d0 2026-03-09T17:27:34.440 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.435+0000 7fbd10ff9640 1 -- 192.168.123.100:0/3034223093 <== osd.6 v2:192.168.123.102:6808/2053868073 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7fbd14064000 con 0x7fbd140639d0 2026-03-09T17:27:34.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f12ec048020 con 0x7f12f411c780 2026-03-09T17:27:34.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6800/2673235927 conn(0x7f12bc077680 0x7f12bc079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f12ec0bdab0 con 0x7f12f411c780 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6813/652999983 conn(0x7f12bc081020 0x7f12bc083480 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 --> v2:192.168.123.100:6813/652999983 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f12ec00aea0 con 0x7f12bc081020 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7f12ec0bdea0 con 0x7f12f411c780 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12f2ffd640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6800/2673235927 conn(0x7f12bc077680 0x7f12bc079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12f27fc640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6813/652999983 conn(0x7f12bc081020 0x7f12bc083480 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12f27fc640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6813/652999983 conn(0x7f12bc081020 0x7f12bc083480 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12f2ffd640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6800/2673235927 conn(0x7f12bc077680 0x7f12bc079b40 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f12dc0097c0 tx=0x7f12dc009340 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.439+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== osd.3 v2:192.168.123.100:6813/652999983 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f12ec00aea0 con 0x7f12bc081020 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 >> v2:192.168.123.102:6808/2053868073 conn(0x7fbd140639d0 msgr2=0x7fbd14075d40 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.102:6808/2053868073 conn(0x7fbd140639d0 0x7fbd14075d40 crc :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbcec077790 msgr2=0x7fbcec079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 --2- 192.168.123.100:0/3034223093 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbcec077790 0x7fbcec079c50 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7fbd04004560 tx=0x7fbd04009290 comp rx=0 tx=0).stop 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410b080 msgr2=0x7fbd14085fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 --2- 192.168.123.100:0/3034223093 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbd1410b080 0x7fbd14085fe0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fbd0800ed30 tx=0x7fbd0800c6a0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbd13fff640 1 -- 192.168.123.100:0/3034223093 reap_dead start 2026-03-09T17:27:34.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 shutdown_connections 2026-03-09T17:27:34.454 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 >> 192.168.123.100:0/3034223093 conn(0x7fbd1406d9f0 msgr2=0x7fbd14072ba0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 shutdown_connections 2026-03-09T17:27:34.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.451+0000 7fbcf27fc640 1 -- 192.168.123.100:0/3034223093 wait complete. 2026-03-09T17:27:34.459 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 --> v2:192.168.123.100:6805/1086087815 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f6a80002cf0 con 0x7f6a8c080fe0 2026-03-09T17:27:34.460 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6ab0ff9640 1 -- 192.168.123.100:0/1874543358 <== osd.1 v2:192.168.123.100:6805/1086087815 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f6a80002cf0 con 0x7f6a8c080fe0 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6805/1086087815 conn(0x7f6a8c080fe0 msgr2=0x7f6a8c083440 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6805/1086087815 conn(0x7f6a8c080fe0 0x7f6a8c083440 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6a8c0776c0 msgr2=0x7f6a8c079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 --2- 192.168.123.100:0/1874543358 >> v2:192.168.123.100:6800/2673235927 conn(0x7f6a8c0776c0 0x7f6a8c079b80 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f6ab410c4a0 tx=0x7f6a9c0023d0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a470 msgr2=0x7f6ab41af6e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 --2- 192.168.123.100:0/1874543358 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ab410a470 0x7f6ab41af6e0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f6aa800d950 tx=0x7f6aa800de20 comp rx=0 tx=0).stop 2026-03-09T17:27:34.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6ab3fff640 1 -- 192.168.123.100:0/1874543358 reap_dead start 2026-03-09T17:27:34.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 shutdown_connections 2026-03-09T17:27:34.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 >> 192.168.123.100:0/1874543358 conn(0x7f6ab406db00 msgr2=0x7f6ab410b450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.463 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 shutdown_connections 2026-03-09T17:27:34.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f6a927fc640 1 -- 192.168.123.100:0/1874543358 wait complete. 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 --> v2:192.168.123.100:6813/652999983 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f12f4074650 con 0x7f12bc081020 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.459+0000 7f12e3fff640 1 -- 192.168.123.100:0/809679606 <== osd.3 v2:192.168.123.100:6813/652999983 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f12f4074650 con 0x7f12bc081020 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6813/652999983 conn(0x7f12bc081020 msgr2=0x7f12bc083480 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6813/652999983 conn(0x7f12bc081020 0x7f12bc083480 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6800/2673235927 conn(0x7f12bc077680 msgr2=0x7f12bc079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 --2- 192.168.123.100:0/809679606 >> v2:192.168.123.100:6800/2673235927 conn(0x7f12bc077680 0x7f12bc079b40 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f12dc0097c0 tx=0x7f12dc009340 comp rx=0 tx=0).stop 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f12f411c780 msgr2=0x7f12f41be1f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 --2- 192.168.123.100:0/809679606 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f12f411c780 0x7f12f41be1f0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f12f406bff0 tx=0x7f12ec002ee0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f37fe640 1 -- 192.168.123.100:0/809679606 reap_dead start 2026-03-09T17:27:34.467 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 shutdown_connections 2026-03-09T17:27:34.467 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 >> 192.168.123.100:0/809679606 conn(0x7f12f406d9c0 msgr2=0x7f12f410b480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.467 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 shutdown_connections 2026-03-09T17:27:34.467 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.463+0000 7f12f3fff640 1 -- 192.168.123.100:0/809679606 wait complete. 
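[editorial note] Every connection in these traces walks the same client-side msgr2 ladder (NONE, BANNER_CONNECTING, HELLO_CONNECTING, AUTH_CONNECTING, READY) or is cut short by mark_down/stop once the client settles on a different monitor. A small, assumption-laden sketch for pulling those per-connection state sequences out of a capture like this one, keyed on the first pointer printed in each "conn(0x...)" dump:

#!/usr/bin/env python3
# Editorial sketch: summarize the msgr2 state ladder each connection walks in
# a capture like the one above.  The regex is based only on the entry format
# visible here and is not an official log parser.
import re
import sys
from collections import defaultdict

CONN_RE = re.compile(r"conn\((0x[0-9a-f]+).*?\bs=([A-Z_]+)")

def connection_states(lines):
    """Return {conn pointer: ordered, de-duplicated list of logged states}."""
    states = defaultdict(list)
    for line in lines:
        for m in CONN_RE.finditer(line):
            ptr, state = m.groups()
            if not states[ptr] or states[ptr][-1] != state:
                states[ptr].append(state)
    return states

if __name__ == "__main__":
    for ptr, seq in connection_states(sys.stdin).items():
        print(ptr, " -> ".join(seq))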
2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 -- 192.168.123.100:0/426216315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a850 msgr2=0x7fa76810acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 --2- 192.168.123.100:0/426216315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a850 0x7fa76810acd0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fa75c009960 tx=0x7fa75c02f140 comp rx=0 tx=0).stop 2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 -- 192.168.123.100:0/426216315 shutdown_connections 2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 --2- 192.168.123.100:0/426216315 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76811c780 0x7fa76811eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 --2- 192.168.123.100:0/426216315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a850 0x7fa76810acd0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 --2- 192.168.123.100:0/426216315 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76810a470 0x7fa7681114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 -- 192.168.123.100:0/426216315 >> 192.168.123.100:0/426216315 conn(0x7fa76806d9f0 msgr2=0x7fa76806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 -- 192.168.123.100:0/426216315 shutdown_connections 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 -- 192.168.123.100:0/426216315 wait complete. 
2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 Processor -- start 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.475+0000 7fa76df31640 1 -- start start 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa76df31640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a470 0x7fa768112b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa76df31640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76811c780 0x7fa768113040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa76df31640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa7681bbc20 0x7fa7681be010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa76df31640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa768121430 con 0x7fa76811c780 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa76df31640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fa7681212b0 con 0x7fa76810a470 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa76df31640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa7681215b0 con 0x7fa7681bbc20 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a470 0x7fa768112b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a470 0x7fa768112b00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:58278/0 (socket says 192.168.123.100:58278) 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 -- 192.168.123.100:0/2571580727 learned_addr learned my addr 192.168.123.100:0/2571580727 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 -- 192.168.123.100:0/2571580727 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa7681bbc20 msgr2=0x7fa7681be010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 --2- 192.168.123.100:0/2571580727 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa7681bbc20 0x7fa7681be010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 -- 
192.168.123.100:0/2571580727 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76811c780 msgr2=0x7fa768113040 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 --2- 192.168.123.100:0/2571580727 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76811c780 0x7fa768113040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.479+0000 7fa7677fe640 1 -- 192.168.123.100:0/2571580727 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa7681be6e0 con 0x7fa76810a470 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.483+0000 7fa7677fe640 1 --2- 192.168.123.100:0/2571580727 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a470 0x7fa768112b00 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fa75800c0c0 tx=0x7fa75800c590 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa758019070 con 0x7fa76810a470 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa76df31640 1 -- 192.168.123.100:0/2571580727 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa7681be970 con 0x7fa76810a470 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa76df31640 1 -- 192.168.123.100:0/2571580727 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa7681beeb0 con 0x7fa76810a470 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa7580092d0 con 0x7fa76810a470 2026-03-09T17:27:34.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa758004900 con 0x7fa76810a470 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa76df31640 1 -- 192.168.123.100:0/2571580727 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7fa734000f80 con 0x7fa76810a470 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fa7580040d0 con 0x7fa76810a470 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa764ff9640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa740077790 0x7fa740079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.487+0000 7fa766ffd640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.100:6800/2673235927 
conn(0x7fa740077790 0x7fa740079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa766ffd640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa740077790 0x7fa740079c50 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fa75c009880 tx=0x7fa75c0057d0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fa758099920 con 0x7fa76810a470 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa764ff9640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.102:6812/3460410068 conn(0x7fa7400810b0 0x7fa740083510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 --> v2:192.168.123.102:6812/3460410068 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fa75801be50 con 0x7fa7400810b0 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7fa758099d10 con 0x7fa76810a470 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa767fff640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.102:6812/3460410068 conn(0x7fa7400810b0 0x7fa740083510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa767fff640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.102:6812/3460410068 conn(0x7fa7400810b0 0x7fa740083510 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.7 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.496 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.491+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== osd.7 v2:192.168.123.102:6812/3460410068 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7fa75801be50 con 0x7fa7400810b0 2026-03-09T17:27:34.503 INFO:teuthology.orchestra.run.vm00.stdout:137438953511 2026-03-09T17:27:34.503 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.4 2026-03-09T17:27:34.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.515+0000 7fa76df31640 1 -- 192.168.123.100:0/2571580727 --> v2:192.168.123.102:6812/3460410068 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fa734002cf0 con 0x7fa7400810b0 2026-03-09T17:27:34.531 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa764ff9640 1 -- 192.168.123.100:0/2571580727 <== osd.7 v2:192.168.123.102:6812/3460410068 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 
0x7fa734002cf0 con 0x7fa7400810b0 2026-03-09T17:27:34.531 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 >> v2:192.168.123.102:6812/3460410068 conn(0x7fa7400810b0 msgr2=0x7fa740083510 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.531 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.102:6812/3460410068 conn(0x7fa7400810b0 0x7fa740083510 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa740077790 msgr2=0x7fa740079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 --2- 192.168.123.100:0/2571580727 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa740077790 0x7fa740079c50 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fa75c009880 tx=0x7fa75c0057d0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a470 msgr2=0x7fa768112b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 --2- 192.168.123.100:0/2571580727 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76810a470 0x7fa768112b00 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fa75800c0c0 tx=0x7fa75800c590 comp rx=0 tx=0).stop 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa767fff640 1 -- 192.168.123.100:0/2571580727 reap_dead start 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 shutdown_connections 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 >> 192.168.123.100:0/2571580727 conn(0x7fa76806d9f0 msgr2=0x7fa76811cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 shutdown_connections 2026-03-09T17:27:34.532 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.519+0000 7fa7467fc640 1 -- 192.168.123.100:0/2571580727 wait complete. 
2026-03-09T17:27:34.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:34 vm00 bash[28333]: cluster 2026-03-09T17:27:32.655361+0000 mgr.y (mgr.14505) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:34.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:34 vm00 bash[28333]: cluster 2026-03-09T17:27:32.655361+0000 mgr.y (mgr.14505) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:34 vm00 bash[20770]: cluster 2026-03-09T17:27:32.655361+0000 mgr.y (mgr.14505) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:34 vm00 bash[20770]: cluster 2026-03-09T17:27:32.655361+0000 mgr.y (mgr.14505) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/3255026899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d410a470 msgr2=0x7ff7d41114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/3255026899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d410a470 0x7ff7d41114d0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7ff7c8009a80 tx=0x7ff7c802f2f0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/3255026899 shutdown_connections 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/3255026899 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7d411c780 0x7ff7d411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/3255026899 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7d410a850 0x7ff7d410acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/3255026899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d410a470 0x7ff7d41114d0 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/3255026899 >> 192.168.123.100:0/3255026899 conn(0x7ff7d406d9f0 msgr2=0x7ff7d406de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/3255026899 shutdown_connections 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/3255026899 wait complete. 
2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 Processor -- start 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- start start 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7d410a850 0x7ff7d41a95b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7d411c780 0x7ff7d41a9af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d41ade80 0x7ff7d41ae330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff7d4121430 con 0x7ff7d41ade80 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7ff7d41212b0 con 0x7ff7d411c780 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff7d41215b0 con 0x7ff7d410a850 2026-03-09T17:27:34.605 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d41ade80 0x7ff7d41ae330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d2ffd640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7d411c780 0x7ff7d41a9af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d2ffd640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7d411c780 0x7ff7d41a9af0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:58292/0 (socket says 192.168.123.100:58292) 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d2ffd640 1 -- 192.168.123.100:0/1037573101 learned_addr learned my addr 192.168.123.100:0/1037573101 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 -- 192.168.123.100:0/1037573101 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7d410a850 msgr2=0x7ff7d41a95b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 
--2- 192.168.123.100:0/1037573101 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7d410a850 0x7ff7d41a95b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 -- 192.168.123.100:0/1037573101 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7d411c780 msgr2=0x7ff7d41a9af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 --2- 192.168.123.100:0/1037573101 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7d411c780 0x7ff7d41a9af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 -- 192.168.123.100:0/1037573101 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff7d41ae9d0 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d3fff640 1 --2- 192.168.123.100:0/1037573101 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d41ade80 0x7ff7d41ae330 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7ff7d4074ce0 tx=0x7ff7c400c710 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff7c4019070 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff7d41aec60 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff7d4112070 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7ff7c40092d0 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff7c40048c0 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.595+0000 7ff7b27fc640 1 -- 192.168.123.100:0/1037573101 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7ff7a0000f80 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7ff7c4007500 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d0ff9640 1 --2- 192.168.123.100:0/1037573101 >> 
v2:192.168.123.100:6800/2673235927 conn(0x7ff7a4077790 0x7ff7a4079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7ff7c409a660 con 0x7ff7d41ade80 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d37fe640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.100:6800/2673235927 conn(0x7ff7a4077790 0x7ff7a4079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d0ff9640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.102:6804/2433922459 conn(0x7ff7a4081130 0x7ff7a4083590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d37fe640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.100:6800/2673235927 conn(0x7ff7a4077790 0x7ff7a4079c50 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7ff7c80099a0 tx=0x7ff7c80023d0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 --> v2:192.168.123.102:6804/2433922459 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7ff7c4016ec0 con 0x7ff7a4081130 2026-03-09T17:27:34.606 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.603+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7ff7c40a1050 con 0x7ff7d41ade80 2026-03-09T17:27:34.618 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.607+0000 7ff7d2ffd640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.102:6804/2433922459 conn(0x7ff7a4081130 0x7ff7a4083590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7ff7d2ffd640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.102:6804/2433922459 conn(0x7ff7a4081130 0x7ff7a4083590 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.620 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.619+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== osd.5 v2:192.168.123.102:6804/2433922459 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7ff7c4016ec0 con 0x7ff7a4081130 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/1464451323 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974010a6d0 msgr2=0x7f974010aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- 192.168.123.100:0/1464451323 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974010a6d0 
0x7f974010aab0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f973800cb20 tx=0x7f9738030580 comp rx=0 tx=0).stop 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/1464451323 shutdown_connections 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- 192.168.123.100:0/1464451323 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9740075470 0x7f974007be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- 192.168.123.100:0/1464451323 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f974010b080 0x7f9740074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- 192.168.123.100:0/1464451323 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974010a6d0 0x7f974010aab0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/1464451323 >> 192.168.123.100:0/1464451323 conn(0x7f974006d9f0 msgr2=0x7f974006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/1464451323 shutdown_connections 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/1464451323 wait complete. 
2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 Processor -- start 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- start start 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9740075470 0x7f9740085b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f974010b080 0x7f9740086070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f974007fc90 0x7f9740080120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f974007e830 con 0x7f974007fc90 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f974007e6b0 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f974007e9b0 con 0x7f974010b080 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9740075470 0x7f9740085b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9740075470 0x7f9740085b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:58312/0 (socket says 192.168.123.100:58312) 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 -- 192.168.123.100:0/2573339217 learned_addr learned my addr 192.168.123.100:0/2573339217 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 -- 192.168.123.100:0/2573339217 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f974010b080 msgr2=0x7f9740086070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 --2- 192.168.123.100:0/2573339217 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f974010b080 0x7f9740086070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 -- 
192.168.123.100:0/2573339217 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f974007fc90 msgr2=0x7f9740080120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 --2- 192.168.123.100:0/2573339217 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f974007fc90 0x7f9740080120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 -- 192.168.123.100:0/2573339217 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f97400809e0 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973f7fe640 1 --2- 192.168.123.100:0/2573339217 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9740075470 0x7f9740085b30 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9738008cc0 tx=0x7f9738008cf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9738009420 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f97401bea10 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f97401befa0 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f9738009cb0 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f973800c590 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.611+0000 7f971e7fc640 1 -- 192.168.123.100:0/2573339217 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f970c000f80 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f973800c730 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6800/2673235927 conn(0x7f97180776c0 0x7f9718079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973effd640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6800/2673235927 
conn(0x7f97180776c0 0x7f9718079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973effd640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6800/2673235927 conn(0x7f97180776c0 0x7f9718079b80 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f9740081370 tx=0x7f9730006d90 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f97380be990 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6801/1564530650 conn(0x7f9718080fe0 0x7f9718083440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 --> v2:192.168.123.100:6801/1564530650 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f973800ae30 con 0x7f9718080fe0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7f97380bed80 con 0x7f9740075470 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973ffff640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6801/1564530650 conn(0x7f9718080fe0 0x7f9718083440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973ffff640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6801/1564530650 conn(0x7f9718080fe0 0x7f9718083440 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.615+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== osd.0 v2:192.168.123.100:6801/1564530650 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f973800ae30 con 0x7f9718080fe0 2026-03-09T17:27:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:34 vm02 bash[23351]: cluster 2026-03-09T17:27:32.655361+0000 mgr.y (mgr.14505) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:34 vm02 bash[23351]: cluster 2026-03-09T17:27:32.655361+0000 mgr.y (mgr.14505) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:34.639 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.631+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 --> v2:192.168.123.100:6801/1564530650 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f970c002cb0 con 0x7f9718080fe0 
2026-03-09T17:27:34.639 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.635+0000 7f973cff9640 1 -- 192.168.123.100:0/2573339217 <== osd.0 v2:192.168.123.100:6801/1564530650 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f970c002cb0 con 0x7f9718080fe0 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.635+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6801/1564530650 conn(0x7f9718080fe0 msgr2=0x7f9718083440 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.635+0000 7f9746059640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6801/1564530650 conn(0x7f9718080fe0 0x7f9718083440 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6800/2673235927 conn(0x7f97180776c0 msgr2=0x7f9718079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 --2- 192.168.123.100:0/2573339217 >> v2:192.168.123.100:6800/2673235927 conn(0x7f97180776c0 0x7f9718079b80 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f9740081370 tx=0x7f9730006d90 comp rx=0 tx=0).stop 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9740075470 msgr2=0x7f9740085b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 --2- 192.168.123.100:0/2573339217 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9740075470 0x7f9740085b30 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9738008cc0 tx=0x7f9738008cf0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f973ffff640 1 -- 192.168.123.100:0/2573339217 reap_dead start 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 shutdown_connections 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 >> 192.168.123.100:0/2573339217 conn(0x7f974006d9f0 msgr2=0x7f9740072df0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 shutdown_connections 2026-03-09T17:27:34.644 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.639+0000 7f9746059640 1 -- 192.168.123.100:0/2573339217 wait complete. 
2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.659+0000 7f4c2ead0640 1 -- 192.168.123.100:0/2121988484 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 msgr2=0x7f4c281114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.659+0000 7f4c2ead0640 1 --2- 192.168.123.100:0/2121988484 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 0x7f4c281114d0 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7f4c20009f90 tx=0x7f4c2002f3d0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- 192.168.123.100:0/2121988484 shutdown_connections 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 --2- 192.168.123.100:0/2121988484 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c2811c780 0x7f4c2811eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 --2- 192.168.123.100:0/2121988484 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2810a850 0x7f4c2810acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 --2- 192.168.123.100:0/2121988484 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 0x7f4c281114d0 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- 192.168.123.100:0/2121988484 >> 192.168.123.100:0/2121988484 conn(0x7f4c2806d9f0 msgr2=0x7f4c2806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- 192.168.123.100:0/2121988484 shutdown_connections 2026-03-09T17:27:34.664 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- 192.168.123.100:0/2121988484 wait complete. 
2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 Processor -- start 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- start start 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 0x7f4c281126e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c2810a850 0x7f4c28112c20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2811c780 0x7f4c281ace50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4c28121430 con 0x7f4c2810a470 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f4c281212b0 con 0x7f4c2811c780 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f4c281215b0 con 0x7f4c2810a850 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2811c780 0x7f4c281ace50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2811c780 0x7f4c281ace50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:58320/0 (socket says 192.168.123.100:58320) 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 -- 192.168.123.100:0/1365283070 learned_addr learned my addr 192.168.123.100:0/1365283070 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c27fff640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c2810a850 0x7f4c28112c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2c845640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 0x7f4c281126e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T17:27:34.665 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 -- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c2810a850 msgr2=0x7f4c28112c20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4c2810a850 0x7f4c28112c20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 -- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 msgr2=0x7f4c281126e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 0x7f4c281126e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 -- 192.168.123.100:0/1365283070 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4c281ad390 con 0x7f4c2811c780 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2c845640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4c2810a470 0x7f4c281126e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2d046640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2811c780 0x7f4c281ace50 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f4c1800e9c0 tx=0x7f4c1800ee90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4c1800cd50 con 0x7f4c2811c780 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- 192.168.123.100:0/1365283070 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4c281ad680 con 0x7f4c2811c780 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4c18004540 con 0x7f4c2811c780 2026-03-09T17:27:34.666 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4c18010690 con 0x7f4c2811c780 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.663+0000 7f4c2ead0640 1 -- 192.168.123.100:0/1365283070 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f4c281adbc0 con 0x7f4c2811c780 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4c18020070 con 0x7f4c2811c780 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c25ffb640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c14077790 0x7f4c14079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c2c845640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c14077790 0x7f4c14079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f4c1809acd0 con 0x7f4c2811c780 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c2c845640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c14077790 0x7f4c14079c50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4c20009da0 tx=0x7f4c2003a040 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c2ead0640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6809/4038313383 conn(0x7f4bf8001630 0x7f4bf8003af0 unknown :-1 s=NONE 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:34.683 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c2ead0640 1 -- 192.168.123.100:0/1365283070 --> v2:192.168.123.100:6809/4038313383 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f4bf8006ba0 con 0x7f4bf8001630 2026-03-09T17:27:34.685 INFO:teuthology.orchestra.run.vm00.stdout:55834574908 2026-03-09T17:27:34.685 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.1 2026-03-09T17:27:34.685 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c27fff640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6809/4038313383 conn(0x7f4bf8001630 0x7f4bf8003af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:34.685 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.675+0000 7f4c27fff640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6809/4038313383 conn(0x7f4bf8001630 0x7f4bf8003af0 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:34.685 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.679+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== osd.2 v2:192.168.123.100:6809/4038313383 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f4bf8006ba0 con 0x7f4bf8001630 2026-03-09T17:27:34.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 --> v2:192.168.123.100:6809/4038313383 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f4bf8005ca0 con 0x7f4bf8001630 2026-03-09T17:27:34.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c25ffb640 1 -- 192.168.123.100:0/1365283070 <== osd.2 v2:192.168.123.100:6809/4038313383 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f4bf8005ca0 con 0x7f4bf8001630 2026-03-09T17:27:34.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6809/4038313383 conn(0x7f4bf8001630 msgr2=0x7f4bf8003af0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6809/4038313383 conn(0x7f4bf8001630 0x7f4bf8003af0 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c14077790 msgr2=0x7f4c14079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 --2- 192.168.123.100:0/1365283070 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4c14077790 0x7f4c14079c50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4c20009da0 tx=0x7f4c2003a040 comp rx=0 tx=0).stop 2026-03-09T17:27:34.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2811c780 msgr2=0x7f4c281ace50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.699+0000 7f4c077fe640 1 --2- 192.168.123.100:0/1365283070 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c2811c780 0x7f4c281ace50 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f4c1800e9c0 tx=0x7f4c1800ee90 comp rx=0 tx=0).stop 2026-03-09T17:27:34.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.703+0000 7f4c2d046640 1 -- 192.168.123.100:0/1365283070 reap_dead start 2026-03-09T17:27:34.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.703+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 shutdown_connections 2026-03-09T17:27:34.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.703+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 >> 192.168.123.100:0/1365283070 conn(0x7f4c2806d9f0 msgr2=0x7f4c2811cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.703+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 shutdown_connections 2026-03-09T17:27:34.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.703+0000 7f4c077fe640 1 -- 192.168.123.100:0/1365283070 wait complete. 2026-03-09T17:27:34.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.711+0000 7ff7b27fc640 1 -- 192.168.123.100:0/1037573101 --> v2:192.168.123.102:6804/2433922459 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7ff7a0002d70 con 0x7ff7a4081130 2026-03-09T17:27:34.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.711+0000 7ff7d0ff9640 1 -- 192.168.123.100:0/1037573101 <== osd.5 v2:192.168.123.102:6804/2433922459 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7ff7a0002d70 con 0x7ff7a4081130 2026-03-09T17:27:34.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.711+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 >> v2:192.168.123.102:6804/2433922459 conn(0x7ff7a4081130 msgr2=0x7ff7a4083590 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.711+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.102:6804/2433922459 conn(0x7ff7a4081130 0x7ff7a4083590 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 >> v2:192.168.123.100:6800/2673235927 conn(0x7ff7a4077790 msgr2=0x7ff7a4079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/1037573101 >> v2:192.168.123.100:6800/2673235927 conn(0x7ff7a4077790 0x7ff7a4079c50 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7ff7c80099a0 tx=0x7ff7c80023d0 comp rx=0 tx=0).stop 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d41ade80 msgr2=0x7ff7d41ae330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 --2- 192.168.123.100:0/1037573101 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7d41ade80 0x7ff7d41ae330 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7ff7d4074ce0 tx=0x7ff7c400c710 comp rx=0 tx=0).stop 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d3fff640 1 -- 192.168.123.100:0/1037573101 reap_dead start 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 shutdown_connections 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 >> 192.168.123.100:0/1037573101 conn(0x7ff7d406d9f0 msgr2=0x7ff7d411cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 shutdown_connections 2026-03-09T17:27:34.731 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:34.715+0000 7ff7d9e14640 1 -- 192.168.123.100:0/1037573101 wait complete. 2026-03-09T17:27:34.764 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 17:27:34 vm00 bash[56982]: ts=2026-03-09T17:27:34.756Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.007073754s 2026-03-09T17:27:34.768 INFO:teuthology.orchestra.run.vm00.stdout:188978561050 2026-03-09T17:27:34.768 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.6 2026-03-09T17:27:34.872 INFO:teuthology.orchestra.run.vm00.stdout:163208757280 2026-03-09T17:27:34.872 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.5 2026-03-09T17:27:34.889 INFO:teuthology.orchestra.run.vm00.stdout:111669149742 2026-03-09T17:27:34.889 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.3 2026-03-09T17:27:34.905 INFO:teuthology.orchestra.run.vm00.stdout:77309411381 2026-03-09T17:27:34.906 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.2 2026-03-09T17:27:34.906 INFO:teuthology.orchestra.run.vm00.stdout:34359738434 2026-03-09T17:27:34.906 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.0 2026-03-09T17:27:34.909 INFO:teuthology.orchestra.run.vm00.stdout:214748364819 2026-03-09T17:27:34.910 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph osd last-stat-seq osd.7 2026-03-09T17:27:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:35 vm02 bash[23351]: cluster 2026-03-09T17:27:34.657958+0000 mgr.y (mgr.14505) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 
216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T17:27:35.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:35 vm02 bash[23351]: cluster 2026-03-09T17:27:34.657958+0000 mgr.y (mgr.14505) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T17:27:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:35 vm00 bash[28333]: cluster 2026-03-09T17:27:34.657958+0000 mgr.y (mgr.14505) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T17:27:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:35 vm00 bash[28333]: cluster 2026-03-09T17:27:34.657958+0000 mgr.y (mgr.14505) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T17:27:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:35 vm00 bash[20770]: cluster 2026-03-09T17:27:34.657958+0000 mgr.y (mgr.14505) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T17:27:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:35 vm00 bash[20770]: cluster 2026-03-09T17:27:34.657958+0000 mgr.y (mgr.14505) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T17:27:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:27:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:27:38.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:37 vm00 bash[28333]: cluster 2026-03-09T17:27:36.658297+0000 mgr.y (mgr.14505) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T17:27:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:37 vm00 bash[28333]: cluster 2026-03-09T17:27:36.658297+0000 mgr.y (mgr.14505) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T17:27:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:37 vm00 bash[20770]: cluster 2026-03-09T17:27:36.658297+0000 mgr.y (mgr.14505) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T17:27:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:37 vm00 bash[20770]: cluster 2026-03-09T17:27:36.658297+0000 mgr.y (mgr.14505) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T17:27:38.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:37 vm02 bash[23351]: cluster 2026-03-09T17:27:36.658297+0000 mgr.y (mgr.14505) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T17:27:38.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:37 vm02 bash[23351]: cluster 2026-03-09T17:27:36.658297+0000 mgr.y (mgr.14505) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T17:27:39.244 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 
2026-03-09T17:27:39.244 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.246 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.248 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.250 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.251 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.252 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.253 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- 192.168.123.100:0/317847320 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf1810a6d0 msgr2=0x7fbf1810aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- 192.168.123.100:0/317847320 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf1810a6d0 0x7fbf1810aab0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fbf140099b0 tx=0x7fbf1402f1d0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- 192.168.123.100:0/317847320 shutdown_connections 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- 192.168.123.100:0/317847320 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf18075470 0x7fbf1807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- 192.168.123.100:0/317847320 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbf1810b080 0x7fbf18074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- 192.168.123.100:0/317847320 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf1810a6d0 0x7fbf1810aab0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- 192.168.123.100:0/317847320 >> 192.168.123.100:0/317847320 conn(0x7fbf1806d9f0 msgr2=0x7fbf1806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- 192.168.123.100:0/317847320 shutdown_connections 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- 192.168.123.100:0/317847320 wait complete. 
2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 Processor -- start 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- start start 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf18075470 0x7fbf18085b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 0x7fbf18086090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbf1807fcb0 0x7fbf18080120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbf1807e7d0 con 0x7fbf1810b080 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fbf1807e650 con 0x7fbf1807fcb0 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.567+0000 7fbf1f2da640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fbf1807e950 con 0x7fbf18075470 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 0x7fbf18086090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 0x7fbf18086090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48106/0 (socket says 192.168.123.100:48106) 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 -- 192.168.123.100:0/2943366575 learned_addr learned my addr 192.168.123.100:0/2943366575 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 -- 192.168.123.100:0/2943366575 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf18075470 msgr2=0x7fbf18085b50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 --2- 192.168.123.100:0/2943366575 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf18075470 0x7fbf18085b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 -- 
192.168.123.100:0/2943366575 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbf1807fcb0 msgr2=0x7fbf18080120 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 --2- 192.168.123.100:0/2943366575 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbf1807fcb0 0x7fbf18080120 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 -- 192.168.123.100:0/2943366575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbf180809e0 con 0x7fbf1810b080 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1c84e640 1 --2- 192.168.123.100:0/2943366575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 0x7fbf18086090 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7fbf1000c340 tx=0x7fbf1000ef90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbf10019070 con 0x7fbf1810b080 2026-03-09T17:27:39.575 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1f2da640 1 -- 192.168.123.100:0/2943366575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbf181bea10 con 0x7fbf1810b080 2026-03-09T17:27:39.575 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf1f2da640 1 -- 192.168.123.100:0/2943366575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbf181bef20 con 0x7fbf1810b080 2026-03-09T17:27:39.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fbf100092d0 con 0x7fbf1810b080 2026-03-09T17:27:39.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbf100075c0 con 0x7fbf1810b080 2026-03-09T17:27:39.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fbf10007760 con 0x7fbf1810b080 2026-03-09T17:27:39.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.571+0000 7fbf0e7fc640 1 --2- 192.168.123.100:0/2943366575 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbeec0777c0 0x7fbeec079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.575+0000 7fbf1d04f640 1 --2- 192.168.123.100:0/2943366575 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbeec0777c0 0x7fbeec079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.575+0000 7fbf1d04f640 1 --2- 
192.168.123.100:0/2943366575 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbeec0777c0 0x7fbeec079c80 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fbf140098d0 tx=0x7fbf140023d0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.575+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fbf1009aa60 con 0x7fbf1810b080 2026-03-09T17:27:39.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.575+0000 7fbf1f2da640 1 -- 192.168.123.100:0/2943366575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbee4005180 con 0x7fbf1810b080 2026-03-09T17:27:39.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.579+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbf10010040 con 0x7fbf1810b080 2026-03-09T17:27:39.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- 192.168.123.100:0/408191568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f010a850 msgr2=0x7fa9f010acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/408191568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f010a850 0x7fa9f010acd0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7fa9d8009960 tx=0x7fa9d802f140 comp rx=0 tx=0).stop 2026-03-09T17:27:39.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- 192.168.123.100:0/408191568 shutdown_connections 2026-03-09T17:27:39.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/408191568 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9f011c780 0x7fa9f011eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/408191568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f010a850 0x7fa9f010acd0 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/408191568 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa9f010a470 0x7fa9f01114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- 192.168.123.100:0/408191568 >> 192.168.123.100:0/408191568 conn(0x7fa9f006d9f0 msgr2=0x7fa9f006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- 192.168.123.100:0/408191568 shutdown_connections 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- 192.168.123.100:0/408191568 wait complete. 
2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 Processor -- start 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- start start 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9f010a470 0x7fa9f01a95a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa9f011c780 0x7fa9f01a9ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 0x7fa9f01ae320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa9f0121430 con 0x7fa9f01ade70 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fa9f01212b0 con 0x7fa9f010a470 2026-03-09T17:27:39.635 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.631+0000 7fa9f60e6640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa9f01215b0 con 0x7fa9f011c780 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9ef7fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9f010a470 0x7fa9f01a95a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 0x7fa9f01ae320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 0x7fa9f01ae320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48120/0 (socket says 192.168.123.100:48120) 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 -- 192.168.123.100:0/3602320734 learned_addr learned my addr 192.168.123.100:0/3602320734 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 -- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa9f011c780 msgr2=0x7fa9f01a9ae0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 
--2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa9f011c780 0x7fa9f01a9ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 -- 192.168.123.100:0/3602320734 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9f010a470 msgr2=0x7fa9f01a95a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 --2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9f010a470 0x7fa9f01a95a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 -- 192.168.123.100:0/3602320734 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa9f01ae9c0 con 0x7fa9f01ade70 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9effff640 1 --2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 0x7fa9f01ae320 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7fa9e0009fd0 tx=0x7fa9e000ef90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa9e0019070 con 0x7fa9f01ade70 2026-03-09T17:27:39.648 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa9f01aec50 con 0x7fa9f01ade70 2026-03-09T17:27:39.648 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.643+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa9f0112070 con 0x7fa9f01ade70 2026-03-09T17:27:39.648 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.647+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa9e00092d0 con 0x7fa9f01ade70 2026-03-09T17:27:39.648 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.647+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa9e0004920 con 0x7fa9f01ade70 2026-03-09T17:27:39.650 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.647+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fa9e0007500 con 0x7fa9f01ade70 2026-03-09T17:27:39.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.651+0000 7fa9ecff9640 1 --2- 192.168.123.100:0/3602320734 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa9c0077790 0x7fa9c0079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.651+0000 7fa9ef7fe640 1 --2- 
192.168.123.100:0/3602320734 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa9c0077790 0x7fa9c0079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.659 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.655+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fa9e009a5f0 con 0x7fa9f01ade70 2026-03-09T17:27:39.659 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.655+0000 7fa9ef7fe640 1 --2- 192.168.123.100:0/3602320734 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa9c0077790 0x7fa9c0079c50 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fa9e4009e30 tx=0x7fa9e4009340 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.659 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.655+0000 7fa9ce7fc640 1 -- 192.168.123.100:0/3602320734 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa9bc005180 con 0x7fa9f01ade70 2026-03-09T17:27:39.660 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.659+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa9e0010040 con 0x7fa9f01ade70 2026-03-09T17:27:39.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- 192.168.123.100:0/2262371344 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4410a850 msgr2=0x7efd4410acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- 192.168.123.100:0/2262371344 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4410a850 0x7efd4410acd0 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7efd3c00b0a0 tx=0x7efd3c02f450 comp rx=0 tx=0).stop 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- 192.168.123.100:0/2262371344 shutdown_connections 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- 192.168.123.100:0/2262371344 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4411c780 0x7efd4411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- 192.168.123.100:0/2262371344 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4410a850 0x7efd4410acd0 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- 192.168.123.100:0/2262371344 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a470 0x7efd441114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- 192.168.123.100:0/2262371344 >> 192.168.123.100:0/2262371344 conn(0x7efd4406ed00 msgr2=0x7efd4406f110 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.702 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- 192.168.123.100:0/2262371344 shutdown_connections 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- 192.168.123.100:0/2262371344 wait complete. 2026-03-09T17:27:39.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 Processor -- start 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- start start 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 0x7efd4411bc00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a850 0x7efd4411c140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4411c780 0x7efd44115e10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7efd441124d0 con 0x7efd4411c780 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7efd44112350 con 0x7efd4410a850 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd43577640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7efd44112650 con 0x7efd4410a470 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd42575640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 0x7efd4411bc00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd42575640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 0x7efd4411bc00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:33392/0 (socket says 192.168.123.100:33392) 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd42575640 1 -- 192.168.123.100:0/1566494502 learned_addr learned my addr 192.168.123.100:0/1566494502 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.699+0000 7efd41d74640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a850 0x7efd4411c140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.703 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd42d76640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4411c780 0x7efd44115e10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd41d74640 1 -- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 msgr2=0x7efd4411bc00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.706 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd41d74640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 0x7efd4411bc00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.706 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd41d74640 1 -- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4411c780 msgr2=0x7efd44115e10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.706 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd41d74640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4411c780 0x7efd44115e10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.711 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd41d74640 1 -- 192.168.123.100:0/1566494502 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efd441166d0 con 0x7efd4410a850 2026-03-09T17:27:39.711 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd42575640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 0x7efd4411bc00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T17:27:39.711 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.703+0000 7efd42d76640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4411c780 0x7efd44115e10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:27:39.711 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.707+0000 7efd41d74640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a850 0x7efd4411c140 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7efd3c036000 tx=0x7efd3c0094d0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.712 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.707+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efd3c047070 con 0x7efd4410a850 2026-03-09T17:27:39.712 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.711+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7efd3c004910 con 0x7efd4410a850 2026-03-09T17:27:39.712 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.711+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efd3c0429b0 con 0x7efd4410a850 2026-03-09T17:27:39.712 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.711+0000 7efd43577640 1 -- 192.168.123.100:0/1566494502 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efd44116960 con 0x7efd4410a850 2026-03-09T17:27:39.712 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.711+0000 7efd43577640 1 -- 192.168.123.100:0/1566494502 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efd441bef10 con 0x7efd4410a850 2026-03-09T17:27:39.713 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.711+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efd0c005180 con 0x7efd4410a850 2026-03-09T17:27:39.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.715+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7efd3c007bd0 con 0x7efd4410a850 2026-03-09T17:27:39.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.715+0000 7efd337fe640 1 --2- 192.168.123.100:0/1566494502 >> v2:192.168.123.100:6800/2673235927 conn(0x7efd18077630 0x7efd18079af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.715+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7efd3c0be4d0 con 0x7efd4410a850 2026-03-09T17:27:39.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.715+0000 7efd42575640 1 --2- 192.168.123.100:0/1566494502 >> v2:192.168.123.100:6800/2673235927 conn(0x7efd18077630 0x7efd18079af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.715+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7efd3c03d070 con 0x7efd4410a850 
2026-03-09T17:27:39.724 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.723+0000 7efd42575640 1 --2- 192.168.123.100:0/1566494502 >> v2:192.168.123.100:6800/2673235927 conn(0x7efd18077630 0x7efd18079af0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7efd2c009f00 tx=0x7efd2c009290 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.735+0000 7f0bb7d64640 1 -- 192.168.123.100:0/1744801325 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a850 msgr2=0x7f0bb010acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.740 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.735+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/1744801325 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a850 0x7f0bb010acd0 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7f0ba801c4c0 tx=0x7f0ba80408b0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.744 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- 192.168.123.100:0/1744801325 shutdown_connections 2026-03-09T17:27:39.744 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/1744801325 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0bb011c780 0x7f0bb011eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.744 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/1744801325 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a850 0x7f0bb010acd0 unknown :-1 s=CLOSED pgs=155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.745 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/1744801325 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a470 0x7f0bb01114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.745 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- 192.168.123.100:0/1744801325 >> 192.168.123.100:0/1744801325 conn(0x7f0bb006d9f0 msgr2=0x7f0bb006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.745 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- 192.168.123.100:0/1744801325 shutdown_connections 2026-03-09T17:27:39.745 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- 192.168.123.100:0/1744801325 wait complete. 
2026-03-09T17:27:39.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 Processor -- start 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- start start 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a470 0x7f0bb011c070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 0x7f0bb0117100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0bb011c780 0x7f0bb0117640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0bb0113c00 con 0x7f0bb010a470 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f0bb0113a80 con 0x7f0bb011c780 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb7d64640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0bb0113d80 con 0x7f0bb010a850 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb5ad9640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a470 0x7f0bb011c070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 0x7f0bb0117100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 0x7f0bb0117100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:33416/0 (socket says 192.168.123.100:33416) 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 -- 192.168.123.100:0/19039699 learned_addr learned my addr 192.168.123.100:0/19039699 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb62da640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0bb011c780 0x7f0bb0117640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.749 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 -- 192.168.123.100:0/19039699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0bb011c780 msgr2=0x7f0bb0117640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0bb011c780 0x7f0bb0117640 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 -- 192.168.123.100:0/19039699 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a470 msgr2=0x7f0bb011c070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a470 0x7f0bb011c070 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 -- 192.168.123.100:0/19039699 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0bb0117f00 con 0x7f0bb010a850 2026-03-09T17:27:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.743+0000 7f0bb52d8640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 0x7f0bb0117100 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f0ba8018c80 tx=0x7f0ba8018cb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.767 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.763+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ba8004d40 con 0x7f0bb010a850 2026-03-09T17:27:39.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.763+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0bb00752a0 con 0x7f0bb010a850 2026-03-09T17:27:39.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.763+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f0bb0075830 con 0x7f0bb010a850 2026-03-09T17:27:39.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.763+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0b7c005180 con 0x7f0bb010a850 2026-03-09T17:27:39.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.767+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0ba8072030 con 0x7f0bb010a850 2026-03-09T17:27:39.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.767+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ba801a0d0 con 0x7f0bb010a850 2026-03-09T17:27:39.770 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.767+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f0ba801a2f0 con 0x7f0bb010a850 2026-03-09T17:27:39.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.767+0000 7f0b9effd640 1 --2- 192.168.123.100:0/19039699 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0b98077670 0x7f0b98079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.767+0000 7f0bb5ad9640 1 --2- 192.168.123.100:0/19039699 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0b98077670 0x7f0b98079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.767+0000 7f0bb5ad9640 1 --2- 192.168.123.100:0/19039699 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0b98077670 0x7f0b98079b30 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f0bac004620 tx=0x7f0bac00a400 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.771+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f0ba80b80e0 con 0x7f0bb010a850 2026-03-09T17:27:39.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.775+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0ba80c00f0 con 0x7f0bb010a850 2026-03-09T17:27:39.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.803+0000 7f66ad585640 1 -- 192.168.123.100:0/342448350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a811c780 msgr2=0x7f66a811eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.803+0000 7f66ad585640 1 --2- 192.168.123.100:0/342448350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a811c780 0x7f66a811eb70 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7f66a000b0a0 tx=0x7f66a002f410 comp rx=0 tx=0).stop 2026-03-09T17:27:39.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- 192.168.123.100:0/342448350 shutdown_connections 2026-03-09T17:27:39.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 --2- 192.168.123.100:0/342448350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a811c780 0x7f66a811eb70 unknown :-1 s=CLOSED pgs=156 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 --2- 192.168.123.100:0/342448350 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f66a810a850 0x7f66a810acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 --2- 192.168.123.100:0/342448350 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f66a810a470 0x7f66a81114d0 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- 192.168.123.100:0/342448350 >> 192.168.123.100:0/342448350 conn(0x7f66a806d9f0 msgr2=0x7f66a806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- 192.168.123.100:0/342448350 shutdown_connections 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- 192.168.123.100:0/342448350 wait complete. 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 Processor -- start 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- start start 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 0x7f66a8119a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f66a810a850 0x7f66a8119fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f66a811c780 0x7f66a8112b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f66a8121430 con 0x7f66a810a470 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f66a81212b0 con 0x7f66a810a850 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f66a81215b0 con 0x7f66a811c780 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 0x7f66a8119a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 0x7f66a8119a90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48188/0 (socket says 192.168.123.100:48188) 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 -- 192.168.123.100:0/2798762984 learned_addr learned my addr 192.168.123.100:0/2798762984 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a77fe640 1 
--2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f66a811c780 0x7f66a8112b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 -- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f66a811c780 msgr2=0x7f66a8112b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 --2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f66a811c780 0x7f66a8112b40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 -- 192.168.123.100:0/2798762984 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f66a810a850 msgr2=0x7f66a8119fd0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:39.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 --2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f66a810a850 0x7f66a8119fd0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 -- 192.168.123.100:0/2798762984 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f66a8113400 con 0x7f66a810a470 2026-03-09T17:27:39.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a6ffd640 1 --2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 0x7f66a8119a90 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7f6690002a10 tx=0x7f6690002ee0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f669000ecc0 con 0x7f66a810a470 2026-03-09T17:27:39.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f66a81136f0 con 0x7f66a810a470 2026-03-09T17:27:39.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f66a8117790 con 0x7f66a810a470 2026-03-09T17:27:39.812 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.807+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f669000ee60 con 0x7f66a810a470 2026-03-09T17:27:39.812 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.811+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6690018740 con 0x7f66a810a470 2026-03-09T17:27:39.820 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.819+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f66900189a0 con 0x7f66a810a470 2026-03-09T17:27:39.820 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.819+0000 7f66a4ff9640 1 --2- 192.168.123.100:0/2798762984 >> v2:192.168.123.100:6800/2673235927 conn(0x7f66780777c0 0x7f6678079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.820 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.819+0000 7f669ffff640 1 --2- 192.168.123.100:0/2798762984 >> v2:192.168.123.100:6800/2673235927 conn(0x7f66780777c0 0x7f6678079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.819+0000 7f669ffff640 1 --2- 192.168.123.100:0/2798762984 >> v2:192.168.123.100:6800/2673235927 conn(0x7f66780777c0 0x7f6678079c80 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f66940096f0 tx=0x7f6694009290 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.819+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f669009b1f0 con 0x7f66a810a470 2026-03-09T17:27:39.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.819+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6674005180 con 0x7f66a810a470 2026-03-09T17:27:39.829 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.827+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6690014030 con 0x7f66a810a470 2026-03-09T17:27:39.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.855+0000 7f176f577640 1 -- 192.168.123.100:0/1161841560 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 msgr2=0x7f177011eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.855+0000 7f176f577640 1 --2- 192.168.123.100:0/1161841560 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f177011eb70 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f1758009990 tx=0x7f175802f170 comp rx=0 tx=0).stop 2026-03-09T17:27:39.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- 192.168.123.100:0/1161841560 shutdown_connections 2026-03-09T17:27:39.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 --2- 192.168.123.100:0/1161841560 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f177011eb70 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 --2- 192.168.123.100:0/1161841560 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a850 0x7f177010acb0 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 --2- 192.168.123.100:0/1161841560 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a470 0x7f17701114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- 192.168.123.100:0/1161841560 >> 192.168.123.100:0/1161841560 conn(0x7f177006d9c0 msgr2=0x7f177006ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- 192.168.123.100:0/1161841560 shutdown_connections 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- 192.168.123.100:0/1161841560 wait complete. 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 Processor -- start 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- start start 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a470 0x7f1770119a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 0x7f1770119fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f1770112af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f1770121400 con 0x7f177011c780 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f1770121280 con 0x7f177010a470 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176f577640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f1770121580 con 0x7f177010a850 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176dd74640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 0x7f1770119fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176dd74640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 0x7f1770119fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:33454/0 (socket says 192.168.123.100:33454) 2026-03-09T17:27:39.860 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176dd74640 1 -- 192.168.123.100:0/1658221137 learned_addr learned my addr 192.168.123.100:0/1658221137 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176ed76640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f1770112af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176ed76640 1 -- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 msgr2=0x7f1770119fa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176e575640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a470 0x7f1770119a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176ed76640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 0x7f1770119fa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176ed76640 1 -- 192.168.123.100:0/1658221137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a470 msgr2=0x7f1770119a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.861 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176ed76640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a470 0x7f1770119a60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176ed76640 1 -- 192.168.123.100:0/1658221137 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1770113310 con 0x7f177011c780 2026-03-09T17:27:39.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176e575640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a470 0x7f1770119a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T17:27:39.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.859+0000 7f176dd74640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 0x7f1770119fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T17:27:39.867 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.863+0000 7f176ed76640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f1770112af0 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7f1758009880 tx=0x7f1758031d20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.867 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.867+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1758046070 con 0x7f177011c780 2026-03-09T17:27:39.868 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.867+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f17701135a0 con 0x7f177011c780 2026-03-09T17:27:39.868 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.867+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f1770117590 con 0x7f177011c780 2026-03-09T17:27:39.868 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.867+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f1758033040 con 0x7f177011c780 2026-03-09T17:27:39.869 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.867+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f17580316e0 con 0x7f177011c780 2026-03-09T17:27:39.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.875+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f175802f990 con 0x7f177011c780 2026-03-09T17:27:39.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.875+0000 7f175f7fe640 1 --2- 192.168.123.100:0/1658221137 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1748077710 0x7f1748079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.875+0000 7f176e575640 1 --2- 192.168.123.100:0/1658221137 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1748077710 0x7f1748079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.875+0000 7f176e575640 1 --2- 192.168.123.100:0/1658221137 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1748077710 0x7f1748079bd0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f1760009fd0 tx=0x7f17600099e0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.875+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f17580bdda0 con 0x7f177011c780 2026-03-09T17:27:39.882 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.879+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7f1734005180 con 0x7f177011c780 2026-03-09T17:27:39.886 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.883+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f175808ad00 con 0x7f177011c780 2026-03-09T17:27:39.921 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.919+0000 7fbf1f2da640 1 -- 192.168.123.100:0/2943366575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7fbee80051a0 con 0x7fbf1810b080 2026-03-09T17:27:39.922 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.919+0000 7fbf0e7fc640 1 -- 192.168.123.100:0/2943366575 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fbf100678c0 con 0x7fbf1810b080 2026-03-09T17:27:39.922 INFO:teuthology.orchestra.run.vm00.stdout:77309411382 2026-03-09T17:27:39.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.927+0000 7fbee3fff640 1 -- 192.168.123.100:0/2943366575 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbeec0777c0 msgr2=0x7fbeec079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 --2- 192.168.123.100:0/2943366575 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbeec0777c0 0x7fbeec079c80 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fbf140098d0 tx=0x7fbf140023d0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 -- 192.168.123.100:0/2943366575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 msgr2=0x7fbf18086090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 --2- 192.168.123.100:0/2943366575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 0x7fbf18086090 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7fbf1000c340 tx=0x7fbf1000ef90 comp rx=0 tx=0).stop 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 -- 192.168.123.100:0/2943366575 shutdown_connections 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 --2- 192.168.123.100:0/2943366575 >> v2:192.168.123.100:6800/2673235927 conn(0x7fbeec0777c0 0x7fbeec079c80 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 --2- 192.168.123.100:0/2943366575 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbf1807fcb0 0x7fbf18080120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 --2- 192.168.123.100:0/2943366575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbf1810b080 0x7fbf18086090 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 --2- 192.168.123.100:0/2943366575 
>> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbf18075470 0x7fbf18085b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.937 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 -- 192.168.123.100:0/2943366575 >> 192.168.123.100:0/2943366575 conn(0x7fbf1806d9f0 msgr2=0x7fbf18072fd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.937 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 -- 192.168.123.100:0/2943366575 shutdown_connections 2026-03-09T17:27:39.937 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.935+0000 7fbee3fff640 1 -- 192.168.123.100:0/2943366575 wait complete. 2026-03-09T17:27:39.947 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:39 vm00 bash[28333]: cluster 2026-03-09T17:27:38.658748+0000 mgr.y (mgr.14505) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:39.947 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:39 vm00 bash[28333]: cluster 2026-03-09T17:27:38.658748+0000 mgr.y (mgr.14505) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:39.947 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:39 vm00 bash[20770]: cluster 2026-03-09T17:27:38.658748+0000 mgr.y (mgr.14505) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:39.947 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:39 vm00 bash[20770]: cluster 2026-03-09T17:27:38.658748+0000 mgr.y (mgr.14505) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:39.964 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.963+0000 7fa29c94c640 1 -- 192.168.123.100:0/3598713096 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a470 msgr2=0x7fa2981114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.963+0000 7fa29c94c640 1 --2- 192.168.123.100:0/3598713096 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a470 0x7fa2981114d0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fa290009f90 tx=0x7fa29002f3b0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- 192.168.123.100:0/954290619 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3810a470 msgr2=0x7f1b381114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/954290619 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3810a470 0x7f1b381114d0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f1b2c009a30 tx=0x7f1b2c02f260 comp rx=0 tx=0).stop 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3d117640 1 --2- 192.168.123.100:0/954290619 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3811c780 0x7f1b3811eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
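Each connect/subscribe/mark_down block above appears to be one short-lived ceph CLI client: the level-1 messenger lines trace the msgr2 handshake (BANNER_CONNECTING -> HELLO_CONNECTING -> AUTH_CONNECTING -> READY), the initial mon_subscribe({config, monmap}) plus mgrmap/osdmap subscriptions, a get_command_descriptions round trip, and finally the command itself, e.g. mon_command({"prefix": "osd last-stat-seq", "id": 2}) answered with 77309411382 on stdout. A minimal sketch of the same round trip from the Python rados bindings, with a placeholder conffile path (not taken from this run):

import json
import rados

# Placeholder conffile; a real client would use this cluster's own conf/keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()                # performs the monclient bootstrap traced above

# Same mon command as in the log: last reported stat sequence for osd.2.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "osd last-stat-seq", "id": 2}), b'')
print(outbuf.decode().strip())   # the log above shows 77309411382 for this query
cluster.shutdown()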
2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 -- 192.168.123.100:0/3598713096 shutdown_connections 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 --2- 192.168.123.100:0/3598713096 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 --2- 192.168.123.100:0/3598713096 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa29810a850 0x7fa29810acb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 --2- 192.168.123.100:0/3598713096 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a470 0x7fa2981114d0 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 -- 192.168.123.100:0/3598713096 >> 192.168.123.100:0/3598713096 conn(0x7fa29806d9c0 msgr2=0x7fa29806ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.973 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b37fff640 1 --2- 192.168.123.100:0/954290619 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b3810acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 -- 192.168.123.100:0/3598713096 shutdown_connections 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- 192.168.123.100:0/954290619 shutdown_connections 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/954290619 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3811c780 0x7f1b3811eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/954290619 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b3810acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/954290619 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3810a470 0x7f1b381114d0 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- 192.168.123.100:0/954290619 >> 192.168.123.100:0/954290619 conn(0x7f1b3806d9f0 msgr2=0x7f1b3806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 -- 192.168.123.100:0/3598713096 wait complete. 
2026-03-09T17:27:39.974 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- 192.168.123.100:0/954290619 shutdown_connections 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- 192.168.123.100:0/954290619 wait complete. 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 Processor -- start 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- start start 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3810a470 0x7f1b381125b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b38112af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3811c780 0x7f1b38113030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.975 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f1b38121490 con 0x7f1b3811c780 2026-03-09T17:27:39.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f1b38121310 con 0x7f1b3810a850 2026-03-09T17:27:39.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7f1b3eba1640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f1b38121610 con 0x7f1b3810a470 2026-03-09T17:27:39.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.971+0000 7fa29c94c640 1 Processor -- start 2026-03-09T17:27:39.977 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.975+0000 7fa29c94c640 1 -- start start 2026-03-09T17:27:39.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.975+0000 7f1b3c916640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3810a470 0x7f1b381125b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.975+0000 7f1b37fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b38112af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.975+0000 7f1b37fff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b38112af0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:46478/0 (socket says 192.168.123.100:46478) 2026-03-09T17:27:39.978 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.975+0000 7f1b37fff640 1 -- 192.168.123.100:0/2942861630 learned_addr learned my addr 192.168.123.100:0/2942861630 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa29c94c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa29810a470 0x7fa298115f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa29c94c640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a850 0x7fa298116450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa29c94c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811b790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa29c94c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa2981124b0 con 0x7fa29811c780 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa29c94c640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fa298112330 con 0x7fa29810a850 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa29c94c640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa298112630 con 0x7fa29810a470 2026-03-09T17:27:39.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa296ffd640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a850 0x7fa298116450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811b790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811b790 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48216/0 (socket says 192.168.123.100:48216) 2026-03-09T17:27:39.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 -- 192.168.123.100:0/2413233837 learned_addr learned my addr 192.168.123.100:0/2413233837 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:39.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa2977fe640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa29810a470 0x7fa298115f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.983 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 -- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa29810a470 msgr2=0x7fa298115f10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa29810a470 0x7fa298115f10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 -- 192.168.123.100:0/2413233837 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a850 msgr2=0x7fa298116450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a850 0x7fa298116450 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 -- 192.168.123.100:0/2413233837 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa29811c050 con 0x7fa29811c780 2026-03-09T17:27:39.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3c916640 1 -- 192.168.123.100:0/2942861630 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 msgr2=0x7f1b38112af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3c916640 1 --2- 192.168.123.100:0/2942861630 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b38112af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3c916640 1 -- 192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3811c780 msgr2=0x7f1b38113030 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3c916640 1 --2- 192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3811c780 0x7f1b38113030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3c916640 1 -- 192.168.123.100:0/2942861630 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1b381be5a0 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3c916640 1 --2- 192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3810a470 0x7f1b381125b0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f1b2c009a00 tx=0x7f1b2c004600 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b35ffb640 1 -- 
192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1b2c038980 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b35ffb640 1 -- 192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f1b2c005020 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b35ffb640 1 -- 192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1b2c004b40 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1b381be830 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f1b381becc0 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7fa297fff640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811b790 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fa28000b550 tx=0x7fa28000ba20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.979+0000 7f1b077fe640 1 -- 192.168.123.100:0/2942861630 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1b00005180 con 0x7f1b3810a470 2026-03-09T17:27:39.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa280013020 con 0x7fa29811c780 2026-03-09T17:27:39.993 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7fa29c94c640 1 -- 192.168.123.100:0/2413233837 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa29811c340 con 0x7fa29811c780 2026-03-09T17:27:39.993 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7fa29c94c640 1 -- 192.168.123.100:0/2413233837 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa2981bee50 con 0x7fa29811c780 2026-03-09T17:27:39.993 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa280004480 con 0x7fa29811c780 2026-03-09T17:27:39.993 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa28000fa10 con 0x7fa29811c780 2026-03-09T17:27:39.994 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa25c005180 con 0x7fa29811c780 
2026-03-09T17:27:39.994 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.987+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fa280020020 con 0x7fa29811c780 2026-03-09T17:27:39.994 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.987+0000 7fa294ff9640 1 --2- 192.168.123.100:0/2413233837 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa2680776c0 0x7fa268079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.994 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.987+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fa28009a460 con 0x7fa29811c780 2026-03-09T17:27:39.995 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.983+0000 7f1b35ffb640 1 -- 192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f1b2c04a070 con 0x7f1b3810a470 2026-03-09T17:27:39.995 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.991+0000 7fa2977fe640 1 --2- 192.168.123.100:0/2413233837 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa2680776c0 0x7fa268079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.995 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.991+0000 7fa2977fe640 1 --2- 192.168.123.100:0/2413233837 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa2680776c0 0x7fa268079b80 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fa290009da0 tx=0x7fa290007420 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:39.995 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.991+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa2800672c0 con 0x7fa29811c780 2026-03-09T17:27:39.996 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.995+0000 7f1b35ffb640 1 --2- 192.168.123.100:0/2942861630 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1b24077790 0x7f1b24079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:39.996 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.995+0000 7f1b35ffb640 1 -- 192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f1b2c0be130 con 0x7f1b3810a470 2026-03-09T17:27:39.996 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.995+0000 7f1b37fff640 1 --2- 192.168.123.100:0/2942861630 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1b24077790 0x7f1b24079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:39.996 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.995+0000 7f1b37fff640 1 --2- 192.168.123.100:0/2942861630 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1b24077790 0x7f1b24079c50 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f1b20004710 tx=0x7f1b20004050 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:40.006 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:39.999+0000 7f1b35ffb640 1 -- 192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1b2c0c06e0 con 0x7f1b3810a470 2026-03-09T17:27:40.041 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411381 got 77309411382 for osd.2 2026-03-09T17:27:40.041 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.095+0000 7fa9ce7fc640 1 -- 192.168.123.100:0/3602320734 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7fa9bc005470 con 0x7fa9f01ade70 2026-03-09T17:27:40.099 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.095+0000 7fa9ecff9640 1 -- 192.168.123.100:0/3602320734 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7fa9e0067450 con 0x7fa9f01ade70 2026-03-09T17:27:40.099 INFO:teuthology.orchestra.run.vm00.stdout:188978561051 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa9c0077790 msgr2=0x7fa9c0079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/3602320734 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa9c0077790 0x7fa9c0079c50 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fa9e4009e30 tx=0x7fa9e4009340 comp rx=0 tx=0).stop 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 msgr2=0x7fa9f01ae320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 0x7fa9f01ae320 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7fa9e0009fd0 tx=0x7fa9e000ef90 comp rx=0 tx=0).stop 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 shutdown_connections 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/3602320734 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa9c0077790 0x7fa9c0079c50 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa9f01ade70 0x7fa9f01ae320 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa9f011c780 0x7fa9f01a9ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.104 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 --2- 192.168.123.100:0/3602320734 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9f010a470 0x7fa9f01a95a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 >> 192.168.123.100:0/3602320734 conn(0x7fa9f006d9f0 msgr2=0x7fa9f011cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 shutdown_connections 2026-03-09T17:27:40.105 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.103+0000 7fa9f60e6640 1 -- 192.168.123.100:0/3602320734 wait complete. 2026-03-09T17:27:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:39 vm02 bash[23351]: cluster 2026-03-09T17:27:38.658748+0000 mgr.y (mgr.14505) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:40.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:39 vm02 bash[23351]: cluster 2026-03-09T17:27:38.658748+0000 mgr.y (mgr.14505) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:40.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.163+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7efd0c005740 con 0x7efd4410a850 2026-03-09T17:27:40.169 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.167+0000 7efd337fe640 1 -- 192.168.123.100:0/1566494502 <== mon.1 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7efd3c08b330 con 0x7efd4410a850 2026-03-09T17:27:40.171 INFO:teuthology.orchestra.run.vm00.stdout:137438953512 2026-03-09T17:27:40.171 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.167+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 >> v2:192.168.123.100:6800/2673235927 conn(0x7efd18077630 msgr2=0x7efd18079af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.171 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.167+0000 7efd317fa640 1 --2- 192.168.123.100:0/1566494502 >> v2:192.168.123.100:6800/2673235927 conn(0x7efd18077630 0x7efd18079af0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7efd2c009f00 tx=0x7efd2c009290 comp rx=0 tx=0).stop 2026-03-09T17:27:40.171 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.167+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a850 msgr2=0x7efd4411c140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.171 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.167+0000 7efd317fa640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a850 0x7efd4411c140 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7efd3c036000 tx=0x7efd3c0094d0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.179+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 shutdown_connections 
2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.179+0000 7efd317fa640 1 --2- 192.168.123.100:0/1566494502 >> v2:192.168.123.100:6800/2673235927 conn(0x7efd18077630 0x7efd18079af0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.179+0000 7efd317fa640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd4411c780 0x7efd44115e10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.179+0000 7efd317fa640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efd4410a850 0x7efd4411c140 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.179+0000 7efd317fa640 1 --2- 192.168.123.100:0/1566494502 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd4410a470 0x7efd4411bc00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.179+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 >> 192.168.123.100:0/1566494502 conn(0x7efd4406ed00 msgr2=0x7efd440711d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.183+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 shutdown_connections 2026-03-09T17:27:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.187+0000 7efd317fa640 1 -- 192.168.123.100:0/1566494502 wait complete. 
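The harness line "need seq 77309411381 got 77309411382 for osd.2" above, followed by "result is None" from teuthology.parallel, looks like the stats-flush wait in ceph_manager: the test records a target stat sequence per OSD and repeats "osd last-stat-seq" against the mon until the reported value reaches that target, which would explain why the same command recurs here for osd.6, osd.4, osd.1 and osd.3. A rough sketch of that polling pattern (assuming the rados handle from the sketch above; wait_for_stat_seq is an illustrative name, not the harness's own):

import json
import time

def wait_for_stat_seq(cluster, osd_id, need, timeout=300):
    # Poll 'osd last-stat-seq' until osd.<osd_id> has reported at least `need`.
    deadline = time.time() + timeout
    while True:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd last-stat-seq", "id": osd_id}), b'')
        got = int(out.decode().strip() or 0)
        if got >= need:
            return got                      # e.g. need 77309411381, got 77309411382
        if time.time() > deadline:
            raise TimeoutError(f"osd.{osd_id}: need seq {need}, still at {got}")
        time.sleep(1)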
2026-03-09T17:27:40.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.187+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f0b7c005740 con 0x7f0bb010a850 2026-03-09T17:27:40.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.191+0000 7f0b9effd640 1 -- 192.168.123.100:0/19039699 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f0ba80c4fa0 con 0x7f0bb010a850 2026-03-09T17:27:40.200 INFO:teuthology.orchestra.run.vm00.stdout:55834574909 2026-03-09T17:27:40.208 INFO:teuthology.orchestra.run.vm00.stdout:111669149743 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.203+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) -- 0x7f6674005470 con 0x7f66a810a470 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f66a4ff9640 1 -- 192.168.123.100:0/2798762984 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f6690068050 con 0x7f66a810a470 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0b98077670 msgr2=0x7f0b98079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/19039699 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0b98077670 0x7f0b98079b30 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f0bac004620 tx=0x7f0bac00a400 comp rx=0 tx=0).stop 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 msgr2=0x7f0bb0117100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 0x7f0bb0117100 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f0ba8018c80 tx=0x7f0ba8018cb0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 shutdown_connections 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/19039699 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0b98077670 0x7f0b98079b30 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0bb011c780 0x7f0bb0117640 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/19039699 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0bb010a850 0x7f0bb0117100 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 --2- 192.168.123.100:0/19039699 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0bb010a470 0x7f0bb011c070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 >> 192.168.123.100:0/19039699 conn(0x7f0bb006d9f0 msgr2=0x7f0bb010b3e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 shutdown_connections 2026-03-09T17:27:40.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.207+0000 7f0bb7d64640 1 -- 192.168.123.100:0/19039699 wait complete. 2026-03-09T17:27:40.216 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 >> v2:192.168.123.100:6800/2673235927 conn(0x7f66780777c0 msgr2=0x7f6678079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 --2- 192.168.123.100:0/2798762984 >> v2:192.168.123.100:6800/2673235927 conn(0x7f66780777c0 0x7f6678079c80 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f66940096f0 tx=0x7f6694009290 comp rx=0 tx=0).stop 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 msgr2=0x7f66a8119a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 --2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 0x7f66a8119a90 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7f6690002a10 tx=0x7f6690002ee0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 shutdown_connections 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 --2- 192.168.123.100:0/2798762984 >> v2:192.168.123.100:6800/2673235927 conn(0x7f66780777c0 0x7f6678079c80 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 --2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f66a811c780 0x7f66a8112b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 --2- 192.168.123.100:0/2798762984 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f66a810a850 0x7f66a8119fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 --2- 192.168.123.100:0/2798762984 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f66a810a470 0x7f66a8119a90 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 >> 192.168.123.100:0/2798762984 conn(0x7f66a806d9f0 msgr2=0x7f66a811cbc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.217 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 shutdown_connections 2026-03-09T17:27:40.218 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.215+0000 7f66ad585640 1 -- 192.168.123.100:0/2798762984 wait complete. 2026-03-09T17:27:40.300 INFO:tasks.cephadm.ceph_manager.ceph:need seq 188978561050 got 188978561051 for osd.6 2026-03-09T17:27:40.301 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.342 INFO:teuthology.orchestra.run.vm00.stdout:34359738435 2026-03-09T17:27:40.342 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.327+0000 7f1b077fe640 1 -- 192.168.123.100:0/2942861630 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7f1b00005470 con 0x7f1b3810a470 2026-03-09T17:27:40.342 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.335+0000 7f1b35ffb640 1 -- 192.168.123.100:0/2942861630 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f1b2c08af90 con 0x7f1b3810a470 2026-03-09T17:27:40.342 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.339+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1b24077790 msgr2=0x7f1b24079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.342 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.339+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/2942861630 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1b24077790 0x7f1b24079c50 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f1b20004710 tx=0x7f1b20004050 comp rx=0 tx=0).stop 2026-03-09T17:27:40.346 INFO:tasks.cephadm.ceph_manager.ceph:need seq 137438953511 got 137438953512 for osd.4 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.339+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3810a470 msgr2=0x7f1b381125b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.339+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3810a470 0x7f1b381125b0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f1b2c009a00 tx=0x7f1b2c004600 comp rx=0 tx=0).stop 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 shutdown_connections 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/2942861630 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1b24077790 0x7f1b24079c50 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 --2- 
192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1b3811c780 0x7f1b38113030 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/2942861630 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b3810a850 0x7f1b38112af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 --2- 192.168.123.100:0/2942861630 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f1b3810a470 0x7f1b381125b0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 >> 192.168.123.100:0/2942861630 conn(0x7f1b3806d9f0 msgr2=0x7f1b3811cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 shutdown_connections 2026-03-09T17:27:40.346 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.343+0000 7f1b3eba1640 1 -- 192.168.123.100:0/2942861630 wait complete. 2026-03-09T17:27:40.346 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.379+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7f1734005740 con 0x7f177011c780 2026-03-09T17:27:40.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.379+0000 7f175f7fe640 1 -- 192.168.123.100:0/1658221137 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f175808fbb0 con 0x7f177011c780 2026-03-09T17:27:40.383 INFO:teuthology.orchestra.run.vm00.stdout:163208757281 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1748077710 msgr2=0x7f1748079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 --2- 192.168.123.100:0/1658221137 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1748077710 0x7f1748079bd0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f1760009fd0 tx=0x7f17600099e0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 msgr2=0x7f1770112af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f1770112af0 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7f1758009880 tx=0x7f1758031d20 comp rx=0 tx=0).stop 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 
shutdown_connections 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 --2- 192.168.123.100:0/1658221137 >> v2:192.168.123.100:6800/2673235927 conn(0x7f1748077710 0x7f1748079bd0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f177011c780 0x7f1770112af0 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f177010a850 0x7f1770119fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 --2- 192.168.123.100:0/1658221137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f177010a470 0x7f1770119a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 >> 192.168.123.100:0/1658221137 conn(0x7f177006d9c0 msgr2=0x7f177011cc00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 shutdown_connections 2026-03-09T17:27:40.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.383+0000 7f176f577640 1 -- 192.168.123.100:0/1658221137 wait complete. 
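[editor's note] The repeated mon_command({"prefix": "osd last-stat-seq", "id": N}) calls and the "need seq X got Y for osd.N" messages around this point are the harness confirming that each OSD's PG stats have reached the monitors before it inspects cluster state: it records a target sequence number per OSD and polls "osd last-stat-seq" until the reported value catches up. A minimal sketch of that polling loop, assuming a plain `ceph` CLI is available (the helper names and timeout are illustrative, not the harness's actual code):

```python
import subprocess
import time

def last_stat_seq(osd_id: int) -> int:
    # Ask the monitors for the last PG-stat sequence reported by this OSD,
    # mirroring the mon_command({"prefix": "osd last-stat-seq", "id": N})
    # calls visible in the log above. `ceph osd last-stat-seq <id>` prints
    # a single integer, like the "137438953512" lines captured on stdout.
    out = subprocess.check_output(
        ["ceph", "osd", "last-stat-seq", str(osd_id)], text=True
    )
    return int(out.strip())

def wait_for_stat_seq(osd_id: int, need: int, timeout: float = 60.0) -> int:
    # Poll until the monitor-side sequence reaches the target, which is what
    # the "need seq X got Y for osd.N" messages record once it succeeds.
    deadline = time.time() + timeout
    while True:
        got = last_stat_seq(osd_id)
        if got >= need:
            return got
        if time.time() > deadline:
            raise TimeoutError(f"osd.{osd_id}: need seq {need}, got {got}")
        time.sleep(1)
```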
2026-03-09T17:27:40.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.391+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7fa25c005740 con 0x7fa29811c780 2026-03-09T17:27:40.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.399+0000 7fa294ff9640 1 -- 192.168.123.100:0/2413233837 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7fa28006c170 con 0x7fa29811c780 2026-03-09T17:27:40.416 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574908 got 55834574909 for osd.1 2026-03-09T17:27:40.417 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.417 INFO:teuthology.orchestra.run.vm00.stdout:214748364820 2026-03-09T17:27:40.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa2680776c0 msgr2=0x7fa268079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 --2- 192.168.123.100:0/2413233837 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa2680776c0 0x7fa268079b80 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fa290009da0 tx=0x7fa290007420 comp rx=0 tx=0).stop 2026-03-09T17:27:40.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 msgr2=0x7fa29811b790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:40.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811b790 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fa28000b550 tx=0x7fa28000ba20 comp rx=0 tx=0).stop 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 shutdown_connections 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 --2- 192.168.123.100:0/2413233837 >> v2:192.168.123.100:6800/2673235927 conn(0x7fa2680776c0 0x7fa268079b80 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa29811c780 0x7fa29811b790 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa29810a850 0x7fa298116450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 --2- 192.168.123.100:0/2413233837 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa29810a470 0x7fa298115f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:40.428 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.423+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 >> 192.168.123.100:0/2413233837 conn(0x7fa29806d9c0 msgr2=0x7fa29810b410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.427+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 shutdown_connections 2026-03-09T17:27:40.428 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149742 got 111669149743 for osd.3 2026-03-09T17:27:40.428 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:40.427+0000 7fa2767fc640 1 -- 192.168.123.100:0/2413233837 wait complete. 2026-03-09T17:27:40.428 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.498 INFO:tasks.cephadm.ceph_manager.ceph:need seq 214748364819 got 214748364820 for osd.7 2026-03-09T17:27:40.498 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.507 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738434 got 34359738435 for osd.0 2026-03-09T17:27:40.507 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.533 INFO:tasks.cephadm.ceph_manager.ceph:need seq 163208757280 got 163208757281 for osd.5 2026-03-09T17:27:40.533 DEBUG:teuthology.parallel:result is None 2026-03-09T17:27:40.533 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T17:27:40.533 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph pg dump --format=json 2026-03-09T17:27:40.753 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:39.922563+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.100:0/2943366575' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T17:27:40.753 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:39.922563+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.100:0/2943366575' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T17:27:40.753 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.098822+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.100:0/3602320734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T17:27:40.753 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.098822+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.100:0/3602320734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.168679+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.100:0/1566494502' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.168679+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.100:0/1566494502' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.192141+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 
192.168.123.100:0/19039699' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.192141+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.100:0/19039699' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.208644+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.100:0/2798762984' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.208644+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.100:0/2798762984' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.339873+0000 mon.c (mon.2) 65 : audit [DBG] from='client.? 192.168.123.100:0/2942861630' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.339873+0000 mon.c (mon.2) 65 : audit [DBG] from='client.? 192.168.123.100:0/2942861630' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.382346+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.100:0/1658221137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.382346+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.100:0/1658221137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.401457+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.100:0/2413233837' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:40 vm00 bash[20770]: audit 2026-03-09T17:27:40.401457+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.100:0/2413233837' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:39.922563+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.100:0/2943366575' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:39.922563+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.100:0/2943366575' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.098822+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 
192.168.123.100:0/3602320734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.098822+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.100:0/3602320734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.168679+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.100:0/1566494502' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.168679+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.100:0/1566494502' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.192141+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.100:0/19039699' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.192141+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.100:0/19039699' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.208644+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.100:0/2798762984' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.208644+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.100:0/2798762984' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.339873+0000 mon.c (mon.2) 65 : audit [DBG] from='client.? 192.168.123.100:0/2942861630' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.339873+0000 mon.c (mon.2) 65 : audit [DBG] from='client.? 192.168.123.100:0/2942861630' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.382346+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.100:0/1658221137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.382346+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.100:0/1658221137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.401457+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 
192.168.123.100:0/2413233837' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T17:27:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:40 vm00 bash[28333]: audit 2026-03-09T17:27:40.401457+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.100:0/2413233837' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:39.922563+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.100:0/2943366575' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:39.922563+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.100:0/2943366575' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.098822+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.100:0/3602320734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.098822+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.100:0/3602320734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.168679+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.100:0/1566494502' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.168679+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.100:0/1566494502' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.192141+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.100:0/19039699' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.192141+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.100:0/19039699' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.208644+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.100:0/2798762984' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.208644+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.100:0/2798762984' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.339873+0000 mon.c (mon.2) 65 : audit [DBG] from='client.? 
192.168.123.100:0/2942861630' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.339873+0000 mon.c (mon.2) 65 : audit [DBG] from='client.? 192.168.123.100:0/2942861630' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.382346+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.100:0/1658221137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.382346+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.100:0/1658221137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.401457+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.100:0/2413233837' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T17:27:41.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:40 vm02 bash[23351]: audit 2026-03-09T17:27:40.401457+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.100:0/2413233837' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T17:27:41.755 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:27:42.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:41 vm00 bash[28333]: cluster 2026-03-09T17:27:40.659060+0000 mgr.y (mgr.14505) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:41 vm00 bash[28333]: cluster 2026-03-09T17:27:40.659060+0000 mgr.y (mgr.14505) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:41 vm00 bash[20770]: cluster 2026-03-09T17:27:40.659060+0000 mgr.y (mgr.14505) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:41 vm00 bash[20770]: cluster 2026-03-09T17:27:40.659060+0000 mgr.y (mgr.14505) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:41 vm02 bash[23351]: cluster 2026-03-09T17:27:40.659060+0000 mgr.y (mgr.14505) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:42.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:41 vm02 bash[23351]: cluster 2026-03-09T17:27:40.659060+0000 mgr.y (mgr.14505) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:42 vm00 bash[28333]: audit 2026-03-09T17:27:41.475218+0000 
mgr.y (mgr.14505) 61 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:42 vm00 bash[28333]: audit 2026-03-09T17:27:41.475218+0000 mgr.y (mgr.14505) 61 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:42 vm00 bash[28333]: audit 2026-03-09T17:27:41.778115+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:43.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:42 vm00 bash[28333]: audit 2026-03-09T17:27:41.778115+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:42 vm00 bash[20770]: audit 2026-03-09T17:27:41.475218+0000 mgr.y (mgr.14505) 61 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:42 vm00 bash[20770]: audit 2026-03-09T17:27:41.475218+0000 mgr.y (mgr.14505) 61 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:42 vm00 bash[20770]: audit 2026-03-09T17:27:41.778115+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:42 vm00 bash[20770]: audit 2026-03-09T17:27:41.778115+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:42 vm02 bash[23351]: audit 2026-03-09T17:27:41.475218+0000 mgr.y (mgr.14505) 61 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:42 vm02 bash[23351]: audit 2026-03-09T17:27:41.475218+0000 mgr.y (mgr.14505) 61 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:42 vm02 bash[23351]: audit 2026-03-09T17:27:41.778115+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:42 vm02 bash[23351]: audit 2026-03-09T17:27:41.778115+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:44.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:43 vm00 bash[28333]: cluster 2026-03-09T17:27:42.659360+0000 mgr.y (mgr.14505) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 
216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:44.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:43 vm00 bash[28333]: cluster 2026-03-09T17:27:42.659360+0000 mgr.y (mgr.14505) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:44.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:43 vm00 bash[20770]: cluster 2026-03-09T17:27:42.659360+0000 mgr.y (mgr.14505) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:44.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:43 vm00 bash[20770]: cluster 2026-03-09T17:27:42.659360+0000 mgr.y (mgr.14505) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:44.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:43 vm02 bash[23351]: cluster 2026-03-09T17:27:42.659360+0000 mgr.y (mgr.14505) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:44.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:43 vm02 bash[23351]: cluster 2026-03-09T17:27:42.659360+0000 mgr.y (mgr.14505) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:45.213 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:45.355 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.351+0000 7f5028730640 1 -- 192.168.123.100:0/1537195905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 msgr2=0x7f5020107cf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:45.355 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.351+0000 7f5028730640 1 --2- 192.168.123.100:0/1537195905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 0x7f5020107cf0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5014009a30 tx=0x7f501402f220 comp rx=0 tx=0).stop 2026-03-09T17:27:45.355 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- 192.168.123.100:0/1537195905 shutdown_connections 2026-03-09T17:27:45.355 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 --2- 192.168.123.100:0/1537195905 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50201022b0 0x7f502010e7e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 --2- 192.168.123.100:0/1537195905 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5020101910 0x7f5020101d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 --2- 192.168.123.100:0/1537195905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 0x7f5020107cf0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- 192.168.123.100:0/1537195905 >> 192.168.123.100:0/1537195905 
conn(0x7f50200fd630 msgr2=0x7f50200ffa50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- 192.168.123.100:0/1537195905 shutdown_connections 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- 192.168.123.100:0/1537195905 wait complete. 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 Processor -- start 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- start start 2026-03-09T17:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 0x7f50201a2740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 0x7f50201a2c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f50264a5640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 0x7f50201a2740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f50264a5640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 0x7f50201a2740 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:33524/0 (socket says 192.168.123.100:33524) 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 0x7f50201a2c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 0x7f50201a2c80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:48240/0 (socket says 192.168.123.100:48240) 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 0x7f502019c900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f50201100a0 con 0x7f50201022b0 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f502010ff20 con 0x7f5020107910 2026-03-09T17:27:45.357 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f5020110220 con 0x7f5020101910 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f50264a5640 1 -- 192.168.123.100:0/2783963610 learned_addr learned my addr 192.168.123.100:0/2783963610 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:45.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5026ca6640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 0x7f502019c900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 -- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 msgr2=0x7f50201a2740 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 0x7f50201a2740 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 -- 192.168.123.100:0/2783963610 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 msgr2=0x7f502019c900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 0x7f502019c900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 -- 192.168.123.100:0/2783963610 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f502019d0d0 con 0x7f50201022b0 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f50264a5640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 0x7f50201a2740 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
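[editor's note] Once every OSD's stat sequence has advanced, the task logs "waiting for clean" and shells out through cephadm for `ceph pg dump --format=json` (the JSON reply appears below, truncated); "clean" here means every PG reports an active+clean state. A rough sketch of that check, assuming a plain `ceph` CLI and the pg_map/pg_stats field layout implied by the truncated reply (both are assumptions for illustration only):

```python
import json
import subprocess
import time

def pgs_all_clean() -> bool:
    # Same query the task issues above via
    # `cephadm shell -- ceph pg dump --format=json`, run directly here.
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "--format=json"], text=True
    )
    dump = json.loads(out)
    # Assumed layout: pg_map.pg_stats holds one entry per PG with a
    # "state" string such as "active+clean".
    pg_stats = dump.get("pg_map", {}).get("pg_stats", [])
    return bool(pg_stats) and all(
        pg.get("state") == "active+clean" for pg in pg_stats
    )

def wait_for_clean(poll: float = 5.0, timeout: float = 300.0) -> None:
    # The "waiting for clean" loop: re-dump PG state until everything
    # settles or the timeout expires.
    deadline = time.time() + timeout
    while not pgs_all_clean():
        if time.time() > deadline:
            raise TimeoutError("cluster did not reach active+clean in time")
        time.sleep(poll)
```

The real harness accepts additional benign states (e.g. scrubbing variants of active+clean); this sketch only illustrates the shape of the check, not its exact policy.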
2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5025ca4640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 0x7f50201a2c80 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f50100079d0 tx=0x7f5010007ea0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:45.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f50100042e0 con 0x7f50201022b0 2026-03-09T17:27:45.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f502019d3c0 con 0x7f50201022b0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f50201a9610 con 0x7f50201022b0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f501000ad60 con 0x7f50201022b0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.355+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f50100153d0 con 0x7f50201022b0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.359+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f5010015630 con 0x7f50201022b0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.359+0000 7f500f7fe640 1 --2- 192.168.123.100:0/2783963610 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4fec077710 0x7f4fec079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.359+0000 7f50264a5640 1 --2- 192.168.123.100:0/2783963610 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4fec077710 0x7f4fec079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.359+0000 7f50264a5640 1 --2- 192.168.123.100:0/2783963610 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4fec077710 0x7f4fec079bd0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f50140097c0 tx=0x7f501403a040 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:45.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.359+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f501005ed80 con 0x7f50201022b0 2026-03-09T17:27:45.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.359+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7f4ff0005180 con 0x7f50201022b0 2026-03-09T17:27:45.364 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.363+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5010066d90 con 0x7f50201022b0 2026-03-09T17:27:45.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.455+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 --> v2:192.168.123.100:6800/2673235927 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f4ff0002bf0 con 0x7f4fec077710 2026-03-09T17:27:45.460 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.459+0000 7f500f7fe640 1 -- 192.168.123.100:0/2783963610 <== mgr.14505 v2:192.168.123.100:6800/2673235927 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346481 (secure 0 0 0) 0x7f501409b990 con 0x7f4fec077710 2026-03-09T17:27:45.461 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:27:45.462 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-09T17:27:45.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4fec077710 msgr2=0x7f4fec079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:45.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 --2- 192.168.123.100:0/2783963610 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4fec077710 0x7f4fec079bd0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f50140097c0 tx=0x7f501403a040 comp rx=0 tx=0).stop 2026-03-09T17:27:45.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 msgr2=0x7f50201a2c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 0x7f50201a2c80 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f50100079d0 tx=0x7f5010007ea0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 shutdown_connections 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 --2- 192.168.123.100:0/2783963610 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4fec077710 0x7f4fec079bd0 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5020107910 0x7f502019c900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50201022b0 0x7f50201a2c80 unknown :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.465 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 --2- 192.168.123.100:0/2783963610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5020101910 0x7f50201a2740 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 >> 192.168.123.100:0/2783963610 conn(0x7f50200fd630 msgr2=0x7f502010cc30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 shutdown_connections 2026-03-09T17:27:45.465 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:45.463+0000 7f5028730640 1 -- 192.168.123.100:0/2783963610 wait complete. 2026-03-09T17:27:45.518 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":27,"stamp":"2026-03-09T17:27:44.659509+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":911,"num_read_kb":770,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221224,"kb_used_data":6524,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518168,"statfs":{"total":171765137408,"available":171538604032,"internally_reserved":0,"allocated":6680576,"data_stored":3376401,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":15,"num_read_kb":15,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_h
it_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.004289"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754278+0000","last_change":"2026-03-09T17:26:34.121816+0000","last_active":"2026-03-09T17:26:56.754278+0000","last_peered":"2026-03-09T17:26:56.754278+0000","last_clean":"2026-03-09T17:26:56.754278+0000","last_became_active":"2026-03-09T17:26:34.121601+0000","last_became_peered":"2026-03-09T17:26:34.121601+0000","last_unstale":"2026-03-09T17:26:56.754278+0000","last_undegraded":"2026-03-09T17:26:56.754278+0000","last_fullsized":"2026-03-09T17:26:56.754278+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:36:25.863362+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627446+0000","last_change":"2026-03-09T17:26:28.100627+0000","last_active":"2026-03-09T17:26:56.627446+0000","last_peered":"2026-03-09T17:26:56.627446+0000","last_clean":"2026-03-09T17:26:56.627446+0000","last_became_active":"2026-03-09T17:26:28.100425+0000","last_became_peered":"2026-03-09T17:26:28.100425+0000","last_unstale":"2026-03-09T17:26:56.627446+0000","last_undegraded":"2026-03-09T17:26:56.627446+0000","last_fullsized":"2026-03-09T17:26:56.627446+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:03:00.063577+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754280+0000","last_change":"2026-03-09T17:26:30.395835+0000","last_active":"2026-03-09T17:26:56.754280+0000","last_peered":"2026-03-09T17:26:56.754280+0000","last_clean":"2026-03-09T17:26:56.754280+0000","last_became_active":"2026-03-09T17:26:30.395688+0000","last_became_peered":"2026-03-09T17:26:30.395688+0000","last_unstale":"2026-03-09T17:26:56.754280+0000","last_undegraded":"2026-03-09T17:26:56.754280+0000","last_fullsized":"2026-03-09T17:26:56.754280+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:41:19.007158+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634715+0000","last_change":"2026-03-09T17:26:32.121819+0000","last_active":"2026-03-09T17:26:56.634715+0000","last_peered":"2026-03-09T17:26:56.634715+0000","last_clean":"2026-03-09T17:26:56.634715+0000","last_became_active":"2026-03-09T17:26:32.121693+0000","last_became_peered":"2026-03-09T17:26:32.121693+0000","last_unstale":"2026-03-09T17:26:56.634715+0000","last_undegraded":"2026-03-09T17:26:56.634715+0000","last_fullsized":"2026-03-09T17:26:56.634715+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:47:17.885391+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754096+0000","last_change":"2026-03-09T17:26:28.089505+0000","last_active":"2026-03-09T17:26:56.754096+0000","last_peered":"2026-03-09T17:26:56.754096+0000","last_clean":"2026-03-09T17:26:56.754096+0000","last_became_active":"2026-03-09T17:26:28.089308+0000","last_became_peered":"2026-03-09T17:26:28.089308+0000","last_unstale":"2026-03-09T17:26:56.754096+0000","last_undegraded":"2026-03-09T17:26:56.754096+0000","last_fullsized":"2026-03-09T17:26:56.754096+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:45:23.762126+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627863+0000","last_change":"2026-03-09T17:26:30.394005+0000","last_active":"2026-03-09T17:26:56.627863+0000","last_peered":"2026-03-09T17:26:56.627863+0000","last_clean":"2026-03-09T17:26:56.627863+0000","last_became_active":"2026-03-09T17:26:30.393798+0000","last_became_peered":"2026-03-09T17:26:30.393798+0000","last_unstale":"2026-03-09T17:26:56.627863+0000","last_undegraded":"2026-03-09T17:26:56.627863+0000","last_fullsized":"2026-03-09T17:26:56.627863+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:32:40.048006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749538+0000","last_change":"2026-03-09T17:26:32.124103+0000","last_active":"2026-03-09T17:26:56.749538+0000","last_peered":"2026-03-09T17:26:56.749538+0000","last_clean":"2026-03-09T17:26:56.749538+0000","last_became_active":"2026-03-09T17:26:32.123957+0000","last_became_peered":"2026-03-09T17:26:32.123957+0000","last_unstale":"2026-03-09T17:26:56.749538+0000","last_undegraded":"2026-03-09T17:26:56.749538+0000","last_fullsized":"2026-03-09T17:26:56.749538+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:41:55.577257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634338+0000","last_change":"2026-03-09T17:26:34.138187+0000","last_active":"2026-03-09T17:26:56.634338+0000","last_peered":"2026-03-09T17:26:56.634338+0000","last_clean":"2026-03-09T17:26:56.634338+0000","last_became_active":"2026-03-09T17:26:34.137670+0000","last_became_peered":"2026-03-09T17:26:34.137670+0000","last_unstale":"2026-03-09T17:26:56.634338+0000","last_undegraded":"2026-03-09T17:26:56.634338+0000","last_fullsized":"2026-03-09T17:26:56.634338+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:48:46.852188+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":15,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.626845+0000","last_change":"2026-03-09T17:26:43.419132+0000","last_active":"2026-03-09T17:26:56.626845+0000","last_peered":"2026-03-09T17:26:56.626845+0000","last_clean":"2026-03-09T17:26:56.626845+0000","last_became_active":"2026-03-09T17:26:43.418755+0000","last_became_peered":"2026-03-09T17:26:43.418755+0000","last_unstale":"2026-03-09T17:26:56.626845+0000","last_undegraded":"2026-03-09T17:26:56.626845+0000","last_fullsized":"2026-03-09T17:26:56.626845+0000","mapping_epoch":62,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":63,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:03:17.341663+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,6,0],"acting":[2,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.1c","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753312+0000","last_change":"2026-03-09T17:26:30.123075+0000","last_active":"2026-03-09T17:26:56.753312+0000","last_peered":"2026-03-09T17:26:56.753312+0000","last_clean":"2026-03-09T17:26:56.753312+0000","last_became_active":"2026-03-09T17:26:30.122923+0000","last_became_peered":"2026-03-09T17:26:30.122923+0000","last_unstale":"2026-03-09T17:26:56.753312+0000","last_undegraded":"2026-03-09T17:26:56.753312+0000","last_fullsized":"2026-03-09T17:26:56.753312+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:27:40.685922+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755104+0000","last_change":"2026-03-09T17:26:32.126733+0000","last_active":"2026-03-09T17:26:56.755104+0000","last_peered":"2026-03-09T17:26:56.755104+0000","last_clean":"2026-03-09T17:26:56.755104+0000","last_became_active":"2026-03-09T17:26:32.126622+0000","last_became_peered":"2026-03-09T17:26:32.126622+0000","last_unstale":"2026-03-09T17:26:56.755104+0000","last_undegraded":"2026-03-09T17:26:56.755104+0000","last_fullsized":"2026-03-09T17:26:56.755104+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:01:31.280677+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753283+0000","last_change":"2026-03-09T17:26:34.135211+0000","last_active":"2026-03-09T17:26:56.753283+0000","last_peered":"2026-03-09T17:26:56.753283+0000","last_clean":"2026-03-09T17:26:56.753283+0000","last_became_active":"2026-03-09T17:26:34.135094+0000","last_became_peered":"2026-03-09T17:26:34.135094+0000","last_unstale":"2026-03-09T17:26:56.753283+0000","last_undegraded":"2026-03-09T17:26:56.753283+0000","last_fullsized":"2026-03-09T17:26:56.753283+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:23:18.847909+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754975+0000","last_change":"2026-03-09T17:26:28.102985+0000","last_active":"2026-03-09T17:26:56.754975+0000","last_peered":"2026-03-09T17:26:56.754975+0000","last_clean":"2026-03-09T17:26:56.754975+0000","last_became_active":"2026-03-09T17:26:28.102915+0000","last_became_peered":"2026-03-09T17:26:28.102915+0000","last_unstale":"2026-03-09T17:26:56.754975+0000","last_undegraded":"2026-03-09T17:26:56.754975+0000","last_fullsized":"2026-03-09T17:26:56.754975+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:16:16.389834+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"61'12","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752250+0000","last_change":"2026-03-09T17:26:30.121047+0000","last_active":"2026-03-09T17:26:56.752250+0000","last_peered":"2026-03-09T17:26:56.752250+0000","last_clean":"2026-03-09T17:26:56.752250+0000","last_became_active":"2026-03-09T17:26:30.120947+0000","last_became_peered":"2026-03-09T17:26:30.120947+0000","last_unstale":"2026-03-09T17:26:56.752250+0000","last_undegraded":"2026-03-09T17:26:56.752250+0000","last_fullsized":"2026-03-09T17:26:56.752250+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:13:14.791683+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752198+0000","last_change":"2026-03-09T17:26:32.120033+0000","last_active":"2026-03-09T17:26:56.752198+0000","last_peered":"2026-03-09T17:26:56.752198+0000","last_clean":"2026-03-09T17:26:56.752198+0000","last_became_active":"2026-03-09T17:26:32.119868+0000","last_became_peered":"2026-03-09T17:26:32.119868+0000","last_unstale":"2026-03-09T17:26:56.752198+0000","last_undegraded":"2026-03-09T17:26:56.752198+0000","last_fullsized":"2026-03-09T17:26:56.752198+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:04:54.592179+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627168+0000","last_change":"2026-03-09T17:26:34.121326+0000","last_active":"2026-03-09T17:26:56.627168+0000","last_peered":"2026-03-09T17:26:56.627168+0000","last_clean":"2026-03-09T17:26:56.627168+0000","last_became_active":"2026-03-09T17:26:34.121259+0000","last_became_peered":"2026-03-09T17:26:34.121259+0000","last_unstale":"2026-03-09T17:26:56.627168+0000","last_undegraded":"2026-03-09T17:26:56.627168+0000","last_fullsized":"2026-03-09T17:26:56.627168+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:32:35.534173+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"61'19","reported_seq":60,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.757293+0000","last_change":"2026-03-09T17:26:30.122084+0000","last_active":"2026-03-09T17:26:56.757293+0000","last_peered":"2026-03-09T17:26:56.757293+0000","last_clean":"2026-03-09T17:26:56.757293+0000","last_became_active":"2026-03-09T17:26:30.121995+0000","last_became_peered":"2026-03-09T17:26:30.121995+0000","last_unstale":"2026-03-09T17:26:56.757293+0000","last_undegraded":"2026-03-09T17:26:56.757293+0000","last_fullsized":"2026-03-09T17:26:56.757293+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:59:24.400840+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754883+0000","last_change":"2026-03-09T17:26:28.110689+0000","last_active":"2026-03-09T17:26:56.754883+0000","last_peered":"2026-03-09T17:26:56.754883+0000","last_clean":"2026-03-09T17:26:56.754883+0000","last_became_active":"2026-03-09T17:26:28.110571+0000","last_became_peered":"2026-03-09T17:26:28.110571+0000","last_unstale":"2026-03-09T17:26:56.754883+0000","last_undegraded":"2026-03-09T17:26:56.754883+0000","last_fullsized":"2026-03-09T17:26:56.754883+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:03:24.475506+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749446+0000","last_change":"2026-03-09T17:26:32.129148+0000","last_active":"2026-03-09T17:26:56.749446+0000","last_peered":"2026-03-09T17:26:56.749446+0000","last_clean":"2026-03-09T17:26:56.749446+0000","last_became_active":"2026-03-09T17:26:32.128978+0000","last_became_peered":"2026-03-09T17:26:32.128978+0000","last_unstale":"2026-03-09T17:26:56.749446+0000","last_undegraded":"2026-03-09T17:26:56.749446+0000","last_fullsized":"2026-03-09T17:26:56.749446+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:45:35.264837+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627745+0000","last_change":"2026-03-09T17:26:34.123007+0000","last_active":"2026-03-09T17:26:56.627745+0000","last_peered":"2026-03-09T17:26:56.627745+0000","last_clean":"2026-03-09T17:26:56.627745+0000","last_became_active":"2026-03-09T17:26:34.122830+0000","last_became_peered":"2026-03-09T17:26:34.122830+0000","last_unstale":"2026-03-09T17:26:56.627745+0000","last_undegraded":"2026-03-09T17:26:56.627745+0000","last_fullsized":"2026-03-09T17:26:56.627745+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:54:41.840250+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753940+0000","last_change":"2026-03-09T17:26:30.122030+0000","last_active":"2026-03-09T17:26:56.753940+0000","last_peered":"2026-03-09T17:26:56.753940+0000","last_clean":"2026-03-09T17:26:56.753940+0000","last_became_active":"2026-03-09T17:26:30.121102+0000","last_became_peered":"2026-03-09T17:26:30.121102+0000","last_unstale":"2026-03-09T17:26:56.753940+0000","last_undegraded":"2026-03-09T17:26:56.753940+0000","last_fullsized":"2026-03-09T17:26:56.753940+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:11:00.033702+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748877+0000","last_change":"2026-03-09T17:26:28.096234+0000","last_active":"2026-03-09T17:26:56.748877+0000","last_peered":"2026-03-09T17:26:56.748877+0000","last_clean":"2026-03-09T17:26:56.748877+0000","last_became_active":"2026-03-09T17:26:28.096106+0000","last_became_peered":"2026-03-09T17:26:28.096106+0000","last_unstale":"2026-03-09T17:26:56.748877+0000","last_undegraded":"2026-03-09T17:26:56.748877+0000","last_fullsized":"2026-03-09T17:26:56.748877+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:45.854747+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"61'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189912+0000","last_change":"2026-03-09T17:26:32.119046+0000","last_active":"2026-03-09T17:27:33.189912+0000","last_peered":"2026-03-09T17:27:33.189912+0000","last_clean":"2026-03-09T17:27:33.189912+0000","last_became_active":"2026-03-09T17:26:32.116392+0000","last_became_peered":"2026-03-09T17:26:32.116392+0000","last_unstale":"2026-03-09T17:27:33.189912+0000","last_undegraded":"2026-03-09T17:27:33.189912+0000","last_fullsized":"2026-03-09T17:27:33.189912+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:34:40.418353+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635522+0000","last_change":"2026-03-09T17:26:34.126734+0000","last_active":"2026-03-09T17:26:56.635522+0000","last_peered":"2026-03-09T17:26:56.635522+0000","last_clean":"2026-03-09T17:26:56.635522+0000","last_became_active":"2026-03-09T17:26:34.126366+0000","last_became_peered":"2026-03-09T17:26:34.126366+0000","last_unstale":"2026-03-09T17:26:56.635522+0000","last_undegraded":"2026-03-09T17:26:56.635522+0000","last_fullsized":"2026-03-09T17:26:56.635522+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:08:27.868690+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754005+0000","last_change":"2026-03-09T17:26:30.114507+0000","last_active":"2026-03-09T17:26:56.754005+0000","last_peered":"2026-03-09T17:26:56.754005+0000","last_clean":"2026-03-09T17:26:56.754005+0000","last_became_active":"2026-03-09T17:26:30.114417+0000","last_became_peered":"2026-03-09T17:26:30.114417+0000","last_unstale":"2026-03-09T17:26:56.754005+0000","last_undegraded":"2026-03-09T17:26:56.754005+0000","last_fullsized":"2026-03-09T17:26:56.754005+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:45:45.268689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748919+0000","last_change":"2026-03-09T17:26:28.096195+0000","last_active":"2026-03-09T17:26:56.748919+0000","last_peered":"2026-03-09T17:26:56.748919+0000","last_clean":"2026-03-09T17:26:56.748919+0000","last_became_active":"2026-03-09T17:26:28.096106+0000","last_became_peered":"2026-03-09T17:26:28.096106+0000","last_unstale":"2026-03-09T17:26:56.748919+0000","last_undegraded":"2026-03-09T17:26:56.748919+0000","last_fullsized":"2026-03-09T17:26:56.748919+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:33:12.974014+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"61'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.188989+0000","last_change":"2026-03-09T17:26:32.126691+0000","last_active":"2026-03-09T17:27:33.188989+0000","last_peered":"2026-03-09T17:27:33.188989+0000","last_clean":"2026-03-09T17:27:33.188989+0000","last_became_active":"2026-03-09T17:26:32.126603+0000","last_became_peered":"2026-03-09T17:26:32.126603+0000","last_unstale":"2026-03-09T17:27:33.188989+0000","last_undegraded":"2026-03-09T17:27:33.188989+0000","last_fullsized":"2026-03-09T17:27:33.188989+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:35:31.811392+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752623+0000","last_change":"2026-03-09T17:26:34.140679+0000","last_active":"2026-03-09T17:26:56.752623+0000","last_peered":"2026-03-09T17:26:56.752623+0000","last_clean":"2026-03-09T17:26:56.752623+0000","last_became_active":"2026-03-09T17:26:34.138814+0000","last_became_peered":"2026-03-09T17:26:34.138814+0000","last_unstale":"2026-03-09T17:26:56.752623+0000","last_undegraded":"2026-03-09T17:26:56.752623+0000","last_fullsized":"2026-03-09T17:26:56.752623+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:23:16.209889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"61'12","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634581+0000","last_change":"2026-03-09T17:26:30.395904+0000","last_active":"2026-03-09T17:26:56.634581+0000","last_peered":"2026-03-09T17:26:56.634581+0000","last_clean":"2026-03-09T17:26:56.634581+0000","last_became_active":"2026-03-09T17:26:30.394804+0000","last_became_peered":"2026-03-09T17:26:30.394804+0000","last_unstale":"2026-03-09T17:26:56.634581+0000","last_undegraded":"2026-03-09T17:26:56.634581+0000","last_fullsized":"2026-03-09T17:26:56.634581+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:35:31.607941+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754869+0000","last_change":"2026-03-09T17:26:28.097744+0000","last_active":"2026-03-09T17:26:56.754869+0000","last_peered":"2026-03-09T17:26:56.754869+0000","last_clean":"2026-03-09T17:26:56.754869+0000","last_became_active":"2026-03-09T17:26:28.097670+0000","last_became_peered":"2026-03-09T17:26:28.097670+0000","last_unstale":"2026-03-09T17:26:56.754869+0000","last_undegraded":"2026-03-09T17:26:56.754869+0000","last_fullsized":"2026-03-09T17:26:56.754869+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:13:21.154249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753454+0000","last_change":"2026-03-09T17:26:32.121492+0000","last_active":"2026-03-09T17:26:56.753454+0000","last_peered":"2026-03-09T17:26:56.753454+0000","last_clean":"2026-03-09T17:26:56.753454+0000","last_became_active":"2026-03-09T17:26:32.121400+0000","last_became_peered":"2026-03-09T17:26:32.121400+0000","last_unstale":"2026-03-09T17:26:56.753454+0000","last_undegraded":"2026-03-09T17:26:56.753454+0000","last_fullsized":"2026-03-09T17:26:56.753454+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:31:21.190162+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753530+0000","last_change":"2026-03-09T17:26:34.135457+0000","last_active":"2026-03-09T17:26:56.753530+0000","last_peered":"2026-03-09T17:26:56.753530+0000","last_clean":"2026-03-09T17:26:56.753530+0000","last_became_active":"2026-03-09T17:26:34.135379+0000","last_became_peered":"2026-03-09T17:26:34.135379+0000","last_unstale":"2026-03-09T17:26:56.753530+0000","last_undegraded":"2026-03-09T17:26:56.753530+0000","last_fullsized":"2026-03-09T17:26:56.753530+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:22:23.849646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"61'12","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629513+0000","last_change":"2026-03-09T17:26:30.097305+0000","last_active":"2026-03-09T17:26:56.629513+0000","last_peered":"2026-03-09T17:26:56.629513+0000","last_clean":"2026-03-09T17:26:56.629513+0000","last_became_active":"2026-03-09T17:26:30.097200+0000","last_became_peered":"2026-03-09T17:26:30.097200+0000","last_unstale":"2026-03-09T17:26:56.629513+0000","last_undegraded":"2026-03-09T17:26:56.629513+0000","last_fullsized":"2026-03-09T17:26:56.629513+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:54:21.042441+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756964+0000","last_change":"2026-03-09T17:26:28.097003+0000","last_active":"2026-03-09T17:26:56.756964+0000","last_peered":"2026-03-09T17:26:56.756964+0000","last_clean":"2026-03-09T17:26:56.756964+0000","last_became_active":"2026-03-09T17:26:28.096797+0000","last_became_peered":"2026-03-09T17:26:28.096797+0000","last_unstale":"2026-03-09T17:26:56.756964+0000","last_undegraded":"2026-03-09T17:26:56.756964+0000","last_fullsized":"2026-03-09T17:26:56.756964+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:46:24.631257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"59'1","reported_seq":37,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635422+0000","last_change":"2026-03-09T17:26:35.110853+0000","last_active":"2026-03-09T17:26:56.635422+0000","last_peered":"2026-03-09T17:26:56.635422+0000","last_clean":"2026-03-09T17:26:56.635422+0000","last_became_active":"2026-03-09T17:26:30.119547+0000","last_became_peered":"2026-03-09T17:26:30.119547+0000","last_unstale":"2026-03-09T17:26:56.635422+0000","last_undegraded":"2026-03-09T17:26:56.635422+0000","last_fullsized":"2026-03-09T17:26:56.635422+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:21:29.186205+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00025376499999999998,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"61'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189185+0000","last_change":"2026-03-09T17:26:32.120838+0000","last_active":"2026-03-09T17:27:33.189185+0000","last_peered":"2026-03-09T17:27:33.189185+0000","last_clean":"2026-03-09T17:27:33.189185+0000","last_became_active":"2026-03-09T17:26:32.116128+0000","last_became_peered":"2026-03-09T17:26:32.116128+0000","last_unstale":"2026-03-09T17:27:33.189185+0000","last_undegraded":"2026-03-09T17:27:33.189185+0000","last_fullsized":"2026-03-09T17:27:33.189185+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:54:10.852902+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754752+0000","last_change":"2026-03-09T17:26:34.122898+0000","last_active":"2026-03-09T17:26:56.754752+0000","last_peered":"2026-03-09T17:26:56.754752+0000","last_clean":"2026-03-09T17:26:56.754752+0000","last_became_active":"2026-03-09T17:26:34.122815+0000","last_became_peered":"2026-03-09T17:26:34.122815+0000","last_unstale":"2026-03-09T17:26:56.754752+0000","last_undegraded":"2026-03-09T17:26:56.754752+0000","last_fullsized":"2026-03-09T17:26:56.754752+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:26:51.974918+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"61'13","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753753+0000","last_change":"2026-03-09T17:26:30.114718+0000","last_active":"2026-03-09T17:26:56.753753+0000","last_peered":"2026-03-09T17:26:56.753753+0000","last_clean":"2026-03-09T17:26:56.753753+0000","last_became_active":"2026-03-09T17:26:30.114362+0000","last_became_peered":"2026-03-09T17:26:30.114362+0000","last_unstale":"2026-03-09T17:26:56.753753+0000","last_undegraded":"2026-03-09T17:26:56.753753+0000","last_fullsized":"2026-03-09T17:26:56.753753+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:24:10.950068+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"54'1","reported_seq":34,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748963+0000","last_change":"2026-03-09T17:26:28.108575+0000","last_active":"2026-03-09T17:26:56.748963+0000","last_peered":"2026-03-09T17:26:56.748963+0000","last_clean":"2026-03-09T17:26:56.748963+0000","last_became_active":"2026-03-09T17:26:28.108288+0000","last_became_peered":"2026-03-09T17:26:28.108288+0000","last_unstale":"2026-03-09T17:26:56.748963+0000","last_undegraded":"2026-03-09T17:26:56.748963+0000","last_fullsized":"2026-03-09T17:26:56.748963+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:21:11.470021+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"61'5","reported_seq":106,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:41.499209+0000","last_change":"2026-03-09T17:26:35.111503+0000","last_active":"2026-03-09T17:27:41.499209+0000","last_peered":"2026-03-09T17:27:41.499209+0000","last_clean":"2026-03-09T17:27:41.499209+0000","last_became_active":"2026-03-09T17:26:30.119247+0000","last_became_peered":"2026-03-09T17:26:30.119247+0000","last_unstale":"2026-03-09T17:27:41.499209+0000","last_undegraded":"2026-03-09T17:27:41.499209+0000","last_fullsized":"2026-03-09T17:27:41.499209+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:07:34.738017+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000233787,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635179+0000","last_change":"2026-03-09T17:26:32.125970+0000","last_active":"2026-03-09T17:26:56.635179+0000","last_peered":"2026-03-09T17:26:56.635179+0000","last_clean":"2026-03-09T17:26:56.635179+0000","last_became_active":"2026-03-09T17:26:32.125422+0000","last_became_peered":"2026-03-09T17:26:32.125422+0000","last_unstale":"2026-03-09T17:26:56.635179+0000","last_undegraded":"2026-03-09T17:26:56.635179+0000","last_fullsized":"2026-03-09T17:26:56.635179+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:10:27.368722+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635121+0000","last_change":"2026-03-09T17:26:34.126909+0000","last_active":"2026-03-09T17:26:56.635121+0000","last_peered":"2026-03-09T17:26:56.635121+0000","last_clean":"2026-03-09T17:26:56.635121+0000","last_became_active":"2026-03-09T17:26:34.125303+0000","last_became_peered":"2026-03-09T17:26:34.125303+0000","last_unstale":"2026-03-09T17:26:56.635121+0000","last_undegraded":"2026-03-09T17:26:56.635121+0000","last_fullsized":"2026-03-09T17:26:56.635121+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:33:27.610543+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"61'30","reported_seq":94,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189677+0000","last_change":"2026-03-09T17:26:30.398325+0000","last_active":"2026-03-09T17:27:33.189677+0000","last_peered":"2026-03-09T17:27:33.189677+0000","last_clean":"2026-03-09T17:27:33.189677+0000","last_became_active":"2026-03-09T17:26:30.398128+0000","last_became_peered":"2026-03-09T17:26:30.398128+0000","last_unstale":"2026-03-09T17:27:33.189677+0000","last_undegraded":"2026-03-09T17:27:33.189677+0000","last_fullsized":"2026-03-09T17:27:33.189677+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:31:02.151892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754890+0000","last_change":"2026-03-09T17:26:28.108063+0000","last_active":"2026-03-09T17:26:56.754890+0000","last_peered":"2026-03-09T17:26:56.754890+0000","last_clean":"2026-03-09T17:26:56.754890+0000","last_became_active":"2026-03-09T17:26:28.107959+0000","last_became_peered":"2026-03-09T17:26:28.107959+0000","last_unstale":"2026-03-09T17:26:56.754890+0000","last_undegraded":"2026-03-09T17:26:56.754890+0000","last_fullsized":"2026-03-09T17:26:56.754890+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:23:27.430232+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.757349+0000","last_change":"2026-03-09T17:26:32.119154+0000","last_active":"2026-03-09T17:26:56.757349+0000","last_peered":"2026-03-09T17:26:56.757349+0000","last_clean":"2026-03-09T17:26:56.757349+0000","last_became_active":"2026-03-09T17:26:32.118877+0000","last_became_peered":"2026-03-09T17:26:32.118877+0000","last_unstale":"2026-03-09T17:26:56.757349+0000","last_undegraded":"2026-03-09T17:26:56.757349+0000","last_fullsized":"2026-03-09T17:26:56.757349+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:50:52.389723+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748240+0000","last_change":"2026-03-09T17:26:34.120382+0000","last_active":"2026-03-09T17:26:56.748240+0000","last_peered":"2026-03-09T17:26:56.748240+0000","last_clean":"2026-03-09T17:26:56.748240+0000","last_became_active":"2026-03-09T17:26:34.120282+0000","last_became_peered":"2026-03-09T17:26:34.120282+0000","last_unstale":"2026-03-09T17:26:56.748240+0000","last_undegraded":"2026-03-09T17:26:56.748240+0000","last_fullsized":"2026-03-09T17:26:56.748240+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:17:00.910205+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"61'16","reported_seq":66,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189402+0000","last_change":"2026-03-09T17:26:30.396968+0000","last_active":"2026-03-09T17:27:33.189402+0000","last_peered":"2026-03-09T17:27:33.189402+0000","last_clean":"2026-03-09T17:27:33.189402+0000","last_became_active":"2026-03-09T17:26:30.396841+0000","last_became_peered":"2026-03-09T17:26:30.396841+0000","last_unstale":"2026-03-09T17:27:33.189402+0000","last_undegraded":"2026-03-09T17:27:33.189402+0000","last_fullsized":"2026-03-09T17:27:33.189402+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:03:12.965142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749023+0000","last_change":"2026-03-09T17:26:28.095980+0000","last_active":"2026-03-09T17:26:56.749023+0000","last_peered":"2026-03-09T17:26:56.749023+0000","last_clean":"2026-03-09T17:26:56.749023+0000","last_became_active":"2026-03-09T17:26:28.095893+0000","last_became_peered":"2026-03-09T17:26:28.095893+0000","last_unstale":"2026-03-09T17:26:56.749023+0000","last_undegraded":"2026-03-09T17:26:56.749023+0000","last_fullsized":"2026-03-09T17:26:56.749023+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:06:56.897212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"61'2","reported_seq":38,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749059+0000","last_change":"2026-03-09T17:26:35.119940+0000","last_active":"2026-03-09T17:26:56.749059+0000","last_peered":"2026-03-09T17:26:56.749059+0000","last_clean":"2026-03-09T17:26:56.749059+0000","last_became_active":"2026-03-09T17:26:30.118422+0000","last_became_peered":"2026-03-09T17:26:30.118422+0000","last_unstale":"2026-03-09T17:26:56.749059+0000","last_undegraded":"2026-03-09T17:26:56.749059+0000","last_fullsized":"2026-03-09T17:26:56.749059+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:11:58.236454+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00054771400000000004,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"61'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189206+0000","last_change":"2026-03-09T17:26:32.119742+0000","last_active":"2026-03-09T17:27:33.189206+0000","last_peered":"2026-03-09T17:27:33.189206+0000","last_clean":"2026-03-09T17:27:33.189206+0000","last_became_active":"2026-03-09T17:26:32.119655+0000","last_became_peered":"2026-03-09T17:26:32.119655+0000","last_unstale":"2026-03-09T17:27:33.189206+0000","last_undegraded":"2026-03-09T17:27:33.189206+0000","last_fullsized":"2026-03-09T17:27:33.189206+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:50:57.099697+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627619+0000","last_change":"2026-03-09T17:26:34.119403+0000","last_active":"2026-03-09T17:26:56.627619+0000","last_peered":"2026-03-09T17:26:56.627619+0000","last_clean":"2026-03-09T17:26:56.627619+0000","last_became_active":"2026-03-09T17:26:34.119266+0000","last_became_peered":"2026-03-09T17:26:34.119266+0000","last_unstale":"2026-03-09T17:26:56.627619+0000","last_undegraded":"2026-03-09T17:26:56.627619+0000","last_fullsized":"2026-03-09T17:26:56.627619+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:34:37.398165+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"61'19","reported_seq":65,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634804+0000","last_change":"2026-03-09T17:26:30.118050+0000","last_active":"2026-03-09T17:26:56.634804+0000","last_peered":"2026-03-09T17:26:56.634804+0000","last_clean":"2026-03-09T17:26:56.634804+0000","last_became_active":"2026-03-09T17:26:30.117283+0000","last_became_peered":"2026-03-09T17:26:30.117283+0000","last_unstale":"2026-03-09T17:26:56.634804+0000","last_undegraded":"2026-03-09T17:26:56.634804+0000","last_fullsized":"2026-03-09T17:26:56.634804+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:49:37.828390+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753069+0000","last_change":"2026-03-09T17:26:28.095537+0000","last_active":"2026-03-09T17:26:56.753069+0000","last_peered":"2026-03-09T17:26:56.753069+0000","last_clean":"2026-03-09T17:26:56.753069+0000","last_became_active":"2026-03-09T17:26:28.090595+0000","last_became_peered":"2026-03-09T17:26:28.090595+0000","last_unstale":"2026-03-09T17:26:56.753069+0000","last_undegraded":"2026-03-09T17:26:56.753069+0000","last_fullsized":"2026-03-09T17:26:56.753069+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:05:44.882119+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627487+0000","last_change":"2026-03-09T17:26:32.128057+0000","last_active":"2026-03-09T17:26:56.627487+0000","last_peered":"2026-03-09T17:26:56.627487+0000","last_clean":"2026-03-09T17:26:56.627487+0000","last_became_active":"2026-03-09T17:26:32.126612+0000","last_became_peered":"2026-03-09T17:26:32.126612+0000","last_unstale":"2026-03-09T17:26:56.627487+0000","last_undegraded":"2026-03-09T17:26:56.627487+0000","last_fullsized":"2026-03-09T17:26:56.627487+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:20:37.659788+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"61'1","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754002+0000","last_change":"2026-03-09T17:26:34.125838+0000","last_active":"2026-03-09T17:26:56.754002+0000","last_peered":"2026-03-09T17:26:56.754002+0000","last_clean":"2026-03-09T17:26:56.754002+0000","last_became_active":"2026-03-09T17:26:34.125770+0000","last_became_peered":"2026-03-09T17:26:34.125770+0000","last_unstale":"2026-03-09T17:26:56.754002+0000","last_undegraded":"2026-03-09T17:26:56.754002+0000","last_fullsized":"2026-03-09T17:26:56.754002+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:09:56.884477+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"61'18","reported_seq":61,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748559+0000","last_change":"2026-03-09T17:26:30.395271+0000","last_active":"2026-03-09T17:26:56.748559+0000","last_peered":"2026-03-09T17:26:56.748559+0000","last_clean":"2026-03-09T17:26:56.748559+0000","last_became_active":"2026-03-09T17:26:30.395032+0000","last_became_peered":"2026-03-09T17:26:30.395032+0000","last_unstale":"2026-03-09T17:26:56.748559+0000","last_undegraded":"2026-03-09T17:26:56.748559+0000","last_fullsized":"2026-03-09T17:26:56.748559+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:08:42.103405+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627425+0000","last_change":"2026-03-09T17:26:28.104215+0000","last_active":"2026-03-09T17:26:56.627425+0000","last_peered":"2026-03-09T17:26:56.627425+0000","last_clean":"2026-03-09T17:26:56.627425+0000","last_became_active":"2026-03-09T17:26:28.104051+0000","last_became_peered":"2026-03-09T17:26:28.104051+0000","last_unstale":"2026-03-09T17:26:56.627425+0000","last_undegraded":"2026-03-09T17:26:56.627425+0000","last_fullsized":"2026-03-09T17:26:56.627425+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:39:20.566288+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627470+0000","last_change":"2026-03-09T17:26:32.116470+0000","last_active":"2026-03-09T17:26:56.627470+0000","last_peered":"2026-03-09T17:26:56.627470+0000","last_clean":"2026-03-09T17:26:56.627470+0000","last_became_active":"2026-03-09T17:26:32.116379+0000","last_became_peered":"2026-03-09T17:26:32.116379+0000","last_unstale":"2026-03-09T17:26:56.627470+0000","last_undegraded":"2026-03-09T17:26:56.627470+0000","last_fullsized":"2026-03-09T17:26:56.627470+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:32:27.813666+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755310+0000","last_change":"2026-03-09T17:26:34.129611+0000","last_active":"2026-03-09T17:26:56.755310+0000","last_peered":"2026-03-09T17:26:56.755310+0000","last_clean":"2026-03-09T17:26:56.755310+0000","last_became_active":"2026-03-09T17:26:34.127502+0000","last_became_peered":"2026-03-09T17:26:34.127502+0000","last_unstale":"2026-03-09T17:26:56.755310+0000","last_undegraded":"2026-03-09T17:26:56.755310+0000","last_fullsized":"2026-03-09T17:26:56.755310+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:04:05.431518+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"61'14","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629443+0000","last_change":"2026-03-09T17:26:30.113476+0000","last_active":"2026-03-09T17:26:56.629443+0000","last_peered":"2026-03-09T17:26:56.629443+0000","last_clean":"2026-03-09T17:26:56.629443+0000","last_became_active":"2026-03-09T17:26:30.113360+0000","last_became_peered":"2026-03-09T17:26:30.113360+0000","last_unstale":"2026-03-09T17:26:56.629443+0000","last_undegraded":"2026-03-09T17:26:56.629443+0000","last_fullsized":"2026-03-09T17:26:56.629443+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:54:38.826467+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755093+0000","last_change":"2026-03-09T17:26:28.097840+0000","last_active":"2026-03-09T17:26:56.755093+0000","last_peered":"2026-03-09T17:26:56.755093+0000","last_clean":"2026-03-09T17:26:56.755093+0000","last_became_active":"2026-03-09T17:26:28.097684+0000","last_became_peered":"2026-03-09T17:26:28.097684+0000","last_unstale":"2026-03-09T17:26:56.755093+0000","last_undegraded":"2026-03-09T17:26:56.755093+0000","last_fullsized":"2026-03-09T17:26:56.755093+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:38:41.639436+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753370+0000","last_change":"2026-03-09T17:26:32.117255+0000","last_active":"2026-03-09T17:26:56.753370+0000","last_peered":"2026-03-09T17:26:56.753370+0000","last_clean":"2026-03-09T17:26:56.753370+0000","last_became_active":"2026-03-09T17:26:32.117149+0000","last_became_peered":"2026-03-09T17:26:32.117149+0000","last_unstale":"2026-03-09T17:26:56.753370+0000","last_undegraded":"2026-03-09T17:26:56.753370+0000","last_fullsized":"2026-03-09T17:26:56.753370+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:41:27.840394+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748724+0000","last_change":"2026-03-09T17:26:34.135460+0000","last_active":"2026-03-09T17:26:56.748724+0000","last_peered":"2026-03-09T17:26:56.748724+0000","last_clean":"2026-03-09T17:26:56.748724+0000","last_became_active":"2026-03-09T17:26:34.135369+0000","last_became_peered":"2026-03-09T17:26:34.135369+0000","last_unstale":"2026-03-09T17:26:56.748724+0000","last_undegraded":"2026-03-09T17:26:56.748724+0000","last_fullsized":"2026-03-09T17:26:56.748724+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:09:12.545572+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754389+0000","last_change":"2026-03-09T17:26:30.120511+0000","last_active":"2026-03-09T17:26:56.754389+0000","last_peered":"2026-03-09T17:26:56.754389+0000","last_clean":"2026-03-09T17:26:56.754389+0000","last_became_active":"2026-03-09T17:26:30.119388+0000","last_became_peered":"2026-03-09T17:26:30.119388+0000","last_unstale":"2026-03-09T17:26:56.754389+0000","last_undegraded":"2026-03-09T17:26:56.754389+0000","last_fullsized":"2026-03-09T17:26:56.754389+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:40:36.245642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752340+0000","last_change":"2026-03-09T17:26:28.098622+0000","last_active":"2026-03-09T17:26:56.752340+0000","last_peered":"2026-03-09T17:26:56.752340+0000","last_clean":"2026-03-09T17:26:56.752340+0000","last_became_active":"2026-03-09T17:26:28.098425+0000","last_became_peered":"2026-03-09T17:26:28.098425+0000","last_unstale":"2026-03-09T17:26:56.752340+0000","last_undegraded":"2026-03-09T17:26:56.752340+0000","last_fullsized":"2026-03-09T17:26:56.752340+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:26:29.688318+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"65'39","reported_seq":68,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:58.745207+0000","last_change":"2026-03-09T17:26:08.617223+0000","last_active":"2026-03-09T17:26:58.745207+0000","last_peered":"2026-03-09T17:26:58.745207+0000","last_clean":"2026-03-09T17:26:58.745207+0000","last_became_active":"2026-03-09T17:26:08.308356+0000","last_became_peered":"2026-03-09T17:26:08.308356+0000","last_unstale":"2026-03-09T17:26:58.745207+0000","last_undegraded":"2026-03-09T17:26:58.745207+0000","last_fullsized":"2026-03-09T17:26:58.745207+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:23:15.924968+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:23:15.924968+0000","last_clean_scrub_stamp":"2026-03-09T17:23:15.924968+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:32:39.998590+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754946+0000","last_change":"2026-03-09T17:26:32.129576+0000","last_active":"2026-03-09T17:26:56.754946+0000","last_peered":"2026-03-09T17:26:56.754946+0000","last_clean":"2026-03-09T17:26:56.754946+0000","last_became_active":"2026-03-09T17:26:32.129477+0000","last_became_peered":"2026-03-09T17:26:32.129477+0000","last_unstale":"2026-03-09T17:26:56.754946+0000","last_undegraded":"2026-03-09T17:26:56.754946+0000","last_fullsized":"2026-03-09T17:26:56.754946+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:19:06.048193+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752290+0000","last_change":"2026-03-09T17:26:34.135869+0000","last_active":"2026-03-09T17:26:56.752290+0000","last_peered":"2026-03-09T17:26:56.752290+0000","last_clean":"2026-03-09T17:26:56.752290+0000","last_became_active":"2026-03-09T17:26:34.135338+0000","last_became_peered":"2026-03-09T17:26:34.135338+0000","last_unstale":"2026-03-09T17:26:56.752290+0000","last_undegraded":"2026-03-09T17:26:56.752290+0000","last_fullsized":"2026-03-09T17:26:56.752290+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:26:58.210940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"61'17","reported_seq":57,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754379+0000","last_change":"2026-03-09T17:26:30.118758+0000","last_active":"2026-03-09T17:26:56.754379+0000","last_peered":"2026-03-09T17:26:56.754379+0000","last_clean":"2026-03-09T17:26:56.754379+0000","last_became_active":"2026-03-09T17:26:30.118676+0000","last_became_peered":"2026-03-09T17:26:30.118676+0000","last_unstale":"2026-03-09T17:26:56.754379+0000","last_undegraded":"2026-03-09T17:26:56.754379+0000","last_fullsized":"2026-03-09T17:26:56.754379+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:38:49.179914+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627372+0000","last_change":"2026-03-09T17:26:28.102544+0000","last_active":"2026-03-09T17:26:56.627372+0000","last_peered":"2026-03-09T17:26:56.627372+0000","last_clean":"2026-03-09T17:26:56.627372+0000","last_became_active":"2026-03-09T17:26:28.102447+0000","last_became_peered":"2026-03-09T17:26:28.102447+0000","last_unstale":"2026-03-09T17:26:56.627372+0000","last_undegraded":"2026-03-09T17:26:56.627372+0000","last_fullsized":"2026-03-09T17:26:56.627372+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:43:29.591411+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627399+0000","last_change":"2026-03-09T17:26:32.107460+0000","last_active":"2026-03-09T17:26:56.627399+0000","last_peered":"2026-03-09T17:26:56.627399+0000","last_clean":"2026-03-09T17:26:56.627399+0000","last_became_active":"2026-03-09T17:26:32.107332+0000","last_became_peered":"2026-03-09T17:26:32.107332+0000","last_unstale":"2026-03-09T17:26:56.627399+0000","last_undegraded":"2026-03-09T17:26:56.627399+0000","last_fullsized":"2026-03-09T17:26:56.627399+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:32:26.845186+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754433+0000","last_change":"2026-03-09T17:26:34.124631+0000","last_active":"2026-03-09T17:26:56.754433+0000","last_peered":"2026-03-09T17:26:56.754433+0000","last_clean":"2026-03-09T17:26:56.754433+0000","last_became_active":"2026-03-09T17:26:34.124564+0000","last_became_peered":"2026-03-09T17:26:34.124564+0000","last_unstale":"2026-03-09T17:26:56.754433+0000","last_undegraded":"2026-03-09T17:26:56.754433+0000","last_fullsized":"2026-03-09T17:26:56.754433+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:48:15.865249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753247+0000","last_change":"2026-03-09T17:26:30.122443+0000","last_active":"2026-03-09T17:26:56.753247+0000","last_peered":"2026-03-09T17:26:56.753247+0000","last_clean":"2026-03-09T17:26:56.753247+0000","last_became_active":"2026-03-09T17:26:30.121255+0000","last_became_peered":"2026-03-09T17:26:30.121255+0000","last_unstale":"2026-03-09T17:26:56.753247+0000","last_undegraded":"2026-03-09T17:26:56.753247+0000","last_fullsized":"2026-03-09T17:26:56.753247+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:11:21.072195+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748833+0000","last_change":"2026-03-09T17:26:28.108511+0000","last_active":"2026-03-09T17:26:56.748833+0000","last_peered":"2026-03-09T17:26:56.748833+0000","last_clean":"2026-03-09T17:26:56.748833+0000","last_became_active":"2026-03-09T17:26:28.108189+0000","last_became_peered":"2026-03-09T17:26:28.108189+0000","last_unstale":"2026-03-09T17:26:56.748833+0000","last_undegraded":"2026-03-09T17:26:56.748833+0000","last_fullsized":"2026-03-09T17:26:56.748833+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:56:03.838251+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627613+0000","last_change":"2026-03-09T17:26:32.120354+0000","last_active":"2026-03-09T17:26:56.627613+0000","last_peered":"2026-03-09T17:26:56.627613+0000","last_clean":"2026-03-09T17:26:56.627613+0000","last_became_active":"2026-03-09T17:26:32.120229+0000","last_became_peered":"2026-03-09T17:26:32.120229+0000","last_unstale":"2026-03-09T17:26:56.627613+0000","last_undegraded":"2026-03-09T17:26:56.627613+0000","last_fullsized":"2026-03-09T17:26:56.627613+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:46:50.708907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.626931+0000","last_change":"2026-03-09T17:26:34.120871+0000","last_active":"2026-03-09T17:26:56.626931+0000","last_peered":"2026-03-09T17:26:56.626931+0000","last_clean":"2026-03-09T17:26:56.626931+0000","last_became_active":"2026-03-09T17:26:34.120611+0000","last_became_peered":"2026-03-09T17:26:34.120611+0000","last_unstale":"2026-03-09T17:26:56.626931+0000","last_undegraded":"2026-03-09T17:26:56.626931+0000","last_fullsized":"2026-03-09T17:26:56.626931+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:18:24.312859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754539+0000","last_change":"2026-03-09T17:26:30.125043+0000","last_active":"2026-03-09T17:26:56.754539+0000","last_peered":"2026-03-09T17:26:56.754539+0000","last_clean":"2026-03-09T17:26:56.754539+0000","last_became_active":"2026-03-09T17:26:30.124914+0000","last_became_peered":"2026-03-09T17:26:30.124914+0000","last_unstale":"2026-03-09T17:26:56.754539+0000","last_undegraded":"2026-03-09T17:26:56.754539+0000","last_fullsized":"2026-03-09T17:26:56.754539+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:19:05.151893+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627325+0000","last_change":"2026-03-09T17:26:28.104230+0000","last_active":"2026-03-09T17:26:56.627325+0000","last_peered":"2026-03-09T17:26:56.627325+0000","last_clean":"2026-03-09T17:26:56.627325+0000","last_became_active":"2026-03-09T17:26:28.104027+0000","last_became_peered":"2026-03-09T17:26:28.104027+0000","last_unstale":"2026-03-09T17:26:56.627325+0000","last_undegraded":"2026-03-09T17:26:56.627325+0000","last_fullsized":"2026-03-09T17:26:56.627325+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:45:05.555677+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"61'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189235+0000","last_change":"2026-03-09T17:26:32.129127+0000","last_active":"2026-03-09T17:27:33.189235+0000","last_peered":"2026-03-09T17:27:33.189235+0000","last_clean":"2026-03-09T17:27:33.189235+0000","last_became_active":"2026-03-09T17:26:32.127304+0000","last_became_peered":"2026-03-09T17:26:32.127304+0000","last_unstale":"2026-03-09T17:27:33.189235+0000","last_undegraded":"2026-03-09T17:27:33.189235+0000","last_fullsized":"2026-03-09T17:27:33.189235+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:12:24.681468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752599+0000","last_change":"2026-03-09T17:26:34.143981+0000","last_active":"2026-03-09T17:26:56.752599+0000","last_peered":"2026-03-09T17:26:56.752599+0000","last_clean":"2026-03-09T17:26:56.752599+0000","last_became_active":"2026-03-09T17:26:34.140600+0000","last_became_peered":"2026-03-09T17:26:34.140600+0000","last_unstale":"2026-03-09T17:26:56.752599+0000","last_undegraded":"2026-03-09T17:26:56.752599+0000","last_fullsized":"2026-03-09T17:26:56.752599+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:28:00.404167+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754954+0000","last_change":"2026-03-09T17:26:30.125033+0000","last_active":"2026-03-09T17:26:56.754954+0000","last_peered":"2026-03-09T17:26:56.754954+0000","last_clean":"2026-03-09T17:26:56.754954+0000","last_became_active":"2026-03-09T17:26:30.124886+0000","last_became_peered":"2026-03-09T17:26:30.124886+0000","last_unstale":"2026-03-09T17:26:56.754954+0000","last_undegraded":"2026-03-09T17:26:56.754954+0000","last_fullsized":"2026-03-09T17:26:56.754954+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:10:56.157992+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"54'2","reported_seq":49,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635328+0000","last_change":"2026-03-09T17:26:28.107361+0000","last_active":"2026-03-09T17:26:56.635328+0000","last_peered":"2026-03-09T17:26:56.635328+0000","last_clean":"2026-03-09T17:26:56.635328+0000","last_became_active":"2026-03-09T17:26:28.107284+0000","last_became_peered":"2026-03-09T17:26:28.107284+0000","last_unstale":"2026-03-09T17:26:56.635328+0000","last_undegraded":"2026-03-09T17:26:56.635328+0000","last_fullsized":"2026-03-09T17:26:56.635328+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:18:02.260389+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627571+0000","last_change":"2026-03-09T17:26:32.113566+0000","last_active":"2026-03-09T17:26:56.627571+0000","last_peered":"2026-03-09T17:26:56.627571+0000","last_clean":"2026-03-09T17:26:56.627571+0000","last_became_active":"2026-03-09T17:26:32.113454+0000","last_became_peered":"2026-03-09T17:26:32.113454+0000","last_unstale":"2026-03-09T17:26:56.627571+0000","last_undegraded":"2026-03-09T17:26:56.627571+0000","last_fullsized":"2026-03-09T17:26:56.627571+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:13:31.629318+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754131+0000","last_change":"2026-03-09T17:26:34.121865+0000","last_active":"2026-03-09T17:26:56.754131+0000","last_peered":"2026-03-09T17:26:56.754131+0000","last_clean":"2026-03-09T17:26:56.754131+0000","last_became_active":"2026-03-09T17:26:34.121751+0000","last_became_peered":"2026-03-09T17:26:34.121751+0000","last_unstale":"2026-03-09T17:26:56.754131+0000","last_undegraded":"2026-03-09T17:26:56.754131+0000","last_fullsized":"2026-03-09T17:26:56.754131+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:56:20.301200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754326+0000","last_change":"2026-03-09T17:26:30.124988+0000","last_active":"2026-03-09T17:26:56.754326+0000","last_peered":"2026-03-09T17:26:56.754326+0000","last_clean":"2026-03-09T17:26:56.754326+0000","last_became_active":"2026-03-09T17:26:30.124810+0000","last_became_peered":"2026-03-09T17:26:30.124810+0000","last_unstale":"2026-03-09T17:26:56.754326+0000","last_undegraded":"2026-03-09T17:26:56.754326+0000","last_fullsized":"2026-03-09T17:26:56.754326+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:58:27.927821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627230+0000","last_change":"2026-03-09T17:26:28.097881+0000","last_active":"2026-03-09T17:26:56.627230+0000","last_peered":"2026-03-09T17:26:56.627230+0000","last_clean":"2026-03-09T17:26:56.627230+0000","last_became_active":"2026-03-09T17:26:28.097778+0000","last_became_peered":"2026-03-09T17:26:28.097778+0000","last_unstale":"2026-03-09T17:26:56.627230+0000","last_undegraded":"2026-03-09T17:26:56.627230+0000","last_fullsized":"2026-03-09T17:26:56.627230+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:35:44.117390+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754454+0000","last_change":"2026-03-09T17:26:32.125143+0000","last_active":"2026-03-09T17:26:56.754454+0000","last_peered":"2026-03-09T17:26:56.754454+0000","last_clean":"2026-03-09T17:26:56.754454+0000","last_became_active":"2026-03-09T17:26:32.114480+0000","last_became_peered":"2026-03-09T17:26:32.114480+0000","last_unstale":"2026-03-09T17:26:56.754454+0000","last_undegraded":"2026-03-09T17:26:56.754454+0000","last_fullsized":"2026-03-09T17:26:56.754454+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:20:22.944515+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627260+0000","last_change":"2026-03-09T17:26:34.123068+0000","last_active":"2026-03-09T17:26:56.627260+0000","last_peered":"2026-03-09T17:26:56.627260+0000","last_clean":"2026-03-09T17:26:56.627260+0000","last_became_active":"2026-03-09T17:26:34.122938+0000","last_became_peered":"2026-03-09T17:26:34.122938+0000","last_unstale":"2026-03-09T17:26:56.627260+0000","last_undegraded":"2026-03-09T17:26:56.627260+0000","last_fullsized":"2026-03-09T17:26:56.627260+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:05:10.094914+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"61'4","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.757011+0000","last_change":"2026-03-09T17:26:30.120153+0000","last_active":"2026-03-09T17:26:56.757011+0000","last_peered":"2026-03-09T17:26:56.757011+0000","last_clean":"2026-03-09T17:26:56.757011+0000","last_became_active":"2026-03-09T17:26:30.120047+0000","last_became_peered":"2026-03-09T17:26:30.120047+0000","last_unstale":"2026-03-09T17:26:56.757011+0000","last_undegraded":"2026-03-09T17:26:56.757011+0000","last_fullsized":"2026-03-09T17:26:56.757011+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:28:29.528471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756898+0000","last_change":"2026-03-09T17:26:28.104471+0000","last_active":"2026-03-09T17:26:56.756898+0000","last_peered":"2026-03-09T17:26:56.756898+0000","last_clean":"2026-03-09T17:26:56.756898+0000","last_became_active":"2026-03-09T17:26:28.104117+0000","last_became_peered":"2026-03-09T17:26:28.104117+0000","last_unstale":"2026-03-09T17:26:56.756898+0000","last_undegraded":"2026-03-09T17:26:56.756898+0000","last_fullsized":"2026-03-09T17:26:56.756898+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:36.518485+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753413+0000","last_change":"2026-03-09T17:26:32.117750+0000","last_active":"2026-03-09T17:26:56.753413+0000","last_peered":"2026-03-09T17:26:56.753413+0000","last_clean":"2026-03-09T17:26:56.753413+0000","last_became_active":"2026-03-09T17:26:32.117664+0000","last_became_peered":"2026-03-09T17:26:32.117664+0000","last_unstale":"2026-03-09T17:26:56.753413+0000","last_undegraded":"2026-03-09T17:26:56.753413+0000","last_fullsized":"2026-03-09T17:26:56.753413+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:11:52.597538+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755402+0000","last_change":"2026-03-09T17:26:34.130232+0000","last_active":"2026-03-09T17:26:56.755402+0000","last_peered":"2026-03-09T17:26:56.755402+0000","last_clean":"2026-03-09T17:26:56.755402+0000","last_became_active":"2026-03-09T17:26:34.130084+0000","last_became_peered":"2026-03-09T17:26:34.130084+0000","last_unstale":"2026-03-09T17:26:56.755402+0000","last_undegraded":"2026-03-09T17:26:56.755402+0000","last_fullsized":"2026-03-09T17:26:56.755402+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:36:39.812867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754701+0000","last_change":"2026-03-09T17:26:30.395252+0000","last_active":"2026-03-09T17:26:56.754701+0000","last_peered":"2026-03-09T17:26:56.754701+0000","last_clean":"2026-03-09T17:26:56.754701+0000","last_became_active":"2026-03-09T17:26:30.395123+0000","last_became_peered":"2026-03-09T17:26:30.395123+0000","last_unstale":"2026-03-09T17:26:56.754701+0000","last_undegraded":"2026-03-09T17:26:56.754701+0000","last_fullsized":"2026-03-09T17:26:56.754701+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:48:44.876719+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"54'2","reported_seq":49,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752709+0000","last_change":"2026-03-09T17:26:28.096054+0000","last_active":"2026-03-09T17:26:56.752709+0000","last_peered":"2026-03-09T17:26:56.752709+0000","last_clean":"2026-03-09T17:26:56.752709+0000","last_became_active":"2026-03-09T17:26:28.095719+0000","last_became_peered":"2026-03-09T17:26:28.095719+0000","last_unstale":"2026-03-09T17:26:56.752709+0000","last_undegraded":"2026-03-09T17:26:56.752709+0000","last_fullsized":"2026-03-09T17:26:56.752709+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:47:51.318385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":1429,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":20,"num_read_kb":20,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"61'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189282+0000","last_change":"2026-03-09T17:26:32.120173+0000","last_active":"2026-03-09T17:27:33.189282+0000","last_peered":"2026-03-09T17:27:33.189282+0000","last_clean":"2026-03-09T17:27:33.189282+0000","last_became_active":"2026-03-09T17:26:32.120055+0000","last_became_peered":"2026-03-09T17:26:32.120055+0000","last_unstale":"2026-03-09T17:27:33.189282+0000","last_undegraded":"2026-03-09T17:27:33.189282+0000","last_fullsized":"2026-03-09T17:27:33.189282+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:10:39.114276+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627238+0000","last_change":"2026-03-09T17:26:34.121045+0000","last_active":"2026-03-09T17:26:56.627238+0000","last_peered":"2026-03-09T17:26:56.627238+0000","last_clean":"2026-03-09T17:26:56.627238+0000","last_became_active":"2026-03-09T17:26:34.120741+0000","last_became_peered":"2026-03-09T17:26:34.120741+0000","last_unstale":"2026-03-09T17:26:56.627238+0000","last_undegraded":"2026-03-09T17:26:56.627238+0000","last_fullsized":"2026-03-09T17:26:56.627238+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:35:36.246159+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627388+0000","last_change":"2026-03-09T17:26:30.115342+0000","last_active":"2026-03-09T17:26:56.627388+0000","last_peered":"2026-03-09T17:26:56.627388+0000","last_clean":"2026-03-09T17:26:56.627388+0000","last_became_active":"2026-03-09T17:26:30.114536+0000","last_became_peered":"2026-03-09T17:26:30.114536+0000","last_unstale":"2026-03-09T17:26:56.627388+0000","last_undegraded":"2026-03-09T17:26:56.627388+0000","last_fullsized":"2026-03-09T17:26:56.627388+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:10:41.945843+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627343+0000","last_change":"2026-03-09T17:26:28.100698+0000","last_active":"2026-03-09T17:26:56.627343+0000","last_peered":"2026-03-09T17:26:56.627343+0000","last_clean":"2026-03-09T17:26:56.627343+0000","last_became_active":"2026-03-09T17:26:28.100435+0000","last_became_peered":"2026-03-09T17:26:28.100435+0000","last_unstale":"2026-03-09T17:26:56.627343+0000","last_undegraded":"2026-03-09T17:26:56.627343+0000","last_fullsized":"2026-03-09T17:26:56.627343+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:26:03.529022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"61'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.189268+0000","last_change":"2026-03-09T17:26:32.124930+0000","last_active":"2026-03-09T17:27:33.189268+0000","last_peered":"2026-03-09T17:27:33.189268+0000","last_clean":"2026-03-09T17:27:33.189268+0000","last_became_active":"2026-03-09T17:26:32.117415+0000","last_became_peered":"2026-03-09T17:26:32.117415+0000","last_unstale":"2026-03-09T17:27:33.189268+0000","last_undegraded":"2026-03-09T17:27:33.189268+0000","last_fullsized":"2026-03-09T17:27:33.189268+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:15:13.515517+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635105+0000","last_change":"2026-03-09T17:26:34.138808+0000","last_active":"2026-03-09T17:26:56.635105+0000","last_peered":"2026-03-09T17:26:56.635105+0000","last_clean":"2026-03-09T17:26:56.635105+0000","last_became_active":"2026-03-09T17:26:34.138616+0000","last_became_peered":"2026-03-09T17:26:34.138616+0000","last_unstale":"2026-03-09T17:26:56.635105+0000","last_undegraded":"2026-03-09T17:26:56.635105+0000","last_fullsized":"2026-03-09T17:26:56.635105+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:07:10.246033+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755260+0000","last_change":"2026-03-09T17:26:30.124973+0000","last_active":"2026-03-09T17:26:56.755260+0000","last_peered":"2026-03-09T17:26:56.755260+0000","last_clean":"2026-03-09T17:26:56.755260+0000","last_became_active":"2026-03-09T17:26:30.124758+0000","last_became_peered":"2026-03-09T17:26:30.124758+0000","last_unstale":"2026-03-09T17:26:56.755260+0000","last_undegraded":"2026-03-09T17:26:56.755260+0000","last_fullsized":"2026-03-09T17:26:56.755260+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:09:41.078271+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756831+0000","last_change":"2026-03-09T17:26:28.090218+0000","last_active":"2026-03-09T17:26:56.756831+0000","last_peered":"2026-03-09T17:26:56.756831+0000","last_clean":"2026-03-09T17:26:56.756831+0000","last_became_active":"2026-03-09T17:26:28.090115+0000","last_became_peered":"2026-03-09T17:26:28.090115+0000","last_unstale":"2026-03-09T17:26:56.756831+0000","last_undegraded":"2026-03-09T17:26:56.756831+0000","last_fullsized":"2026-03-09T17:26:56.756831+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:02:34.392200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753903+0000","last_change":"2026-03-09T17:26:32.125008+0000","last_active":"2026-03-09T17:26:56.753903+0000","last_peered":"2026-03-09T17:26:56.753903+0000","last_clean":"2026-03-09T17:26:56.753903+0000","last_became_active":"2026-03-09T17:26:32.121849+0000","last_became_peered":"2026-03-09T17:26:32.121849+0000","last_unstale":"2026-03-09T17:26:56.753903+0000","last_undegraded":"2026-03-09T17:26:56.753903+0000","last_fullsized":"2026-03-09T17:26:56.753903+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:15:12.730149+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627069+0000","last_change":"2026-03-09T17:26:34.136649+0000","last_active":"2026-03-09T17:26:56.627069+0000","last_peered":"2026-03-09T17:26:56.627069+0000","last_clean":"2026-03-09T17:26:56.627069+0000","last_became_active":"2026-03-09T17:26:34.136545+0000","last_became_peered":"2026-03-09T17:26:34.136545+0000","last_unstale":"2026-03-09T17:26:56.627069+0000","last_undegraded":"2026-03-09T17:26:56.627069+0000","last_fullsized":"2026-03-09T17:26:56.627069+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:47:36.825938+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634519+0000","last_change":"2026-03-09T17:26:30.124478+0000","last_active":"2026-03-09T17:26:56.634519+0000","last_peered":"2026-03-09T17:26:56.634519+0000","last_clean":"2026-03-09T17:26:56.634519+0000","last_became_active":"2026-03-09T17:26:30.124399+0000","last_became_peered":"2026-03-09T17:26:30.124399+0000","last_unstale":"2026-03-09T17:26:56.634519+0000","last_undegraded":"2026-03-09T17:26:56.634519+0000","last_fullsized":"2026-03-09T17:26:56.634519+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:07:20.313320+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748652+0000","last_change":"2026-03-09T17:26:28.087985+0000","last_active":"2026-03-09T17:26:56.748652+0000","last_peered":"2026-03-09T17:26:56.748652+0000","last_clean":"2026-03-09T17:26:56.748652+0000","last_became_active":"2026-03-09T17:26:28.087883+0000","last_became_peered":"2026-03-09T17:26:28.087883+0000","last_unstale":"2026-03-09T17:26:56.748652+0000","last_undegraded":"2026-03-09T17:26:56.748652+0000","last_fullsized":"2026-03-09T17:26:56.748652+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:04:07.028953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748681+0000","last_change":"2026-03-09T17:26:32.130920+0000","last_active":"2026-03-09T17:26:56.748681+0000","last_peered":"2026-03-09T17:26:56.748681+0000","last_clean":"2026-03-09T17:26:56.748681+0000","last_became_active":"2026-03-09T17:26:32.130825+0000","last_became_peered":"2026-03-09T17:26:32.130825+0000","last_unstale":"2026-03-09T17:26:56.748681+0000","last_undegraded":"2026-03-09T17:26:56.748681+0000","last_fullsized":"2026-03-09T17:26:56.748681+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:17:41.176939+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753593+0000","last_change":"2026-03-09T17:26:34.138779+0000","last_active":"2026-03-09T17:26:56.753593+0000","last_peered":"2026-03-09T17:26:56.753593+0000","last_clean":"2026-03-09T17:26:56.753593+0000","last_became_active":"2026-03-09T17:26:34.137399+0000","last_became_peered":"2026-03-09T17:26:34.137399+0000","last_unstale":"2026-03-09T17:26:56.753593+0000","last_undegraded":"2026-03-09T17:26:56.753593+0000","last_fullsized":"2026-03-09T17:26:56.753593+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:51:54.899500+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"61'6","reported_seq":38,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629577+0000","last_change":"2026-03-09T17:26:30.118289+0000","last_active":"2026-03-09T17:26:56.629577+0000","last_peered":"2026-03-09T17:26:56.629577+0000","last_clean":"2026-03-09T17:26:56.629577+0000","last_became_active":"2026-03-09T17:26:30.118123+0000","last_became_peered":"2026-03-09T17:26:30.118123+0000","last_unstale":"2026-03-09T17:26:56.629577+0000","last_undegraded":"2026-03-09T17:26:56.629577+0000","last_fullsized":"2026-03-09T17:26:56.629577+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:01:31.934885+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752685+0000","last_change":"2026-03-09T17:26:28.102496+0000","last_active":"2026-03-09T17:26:56.752685+0000","last_peered":"2026-03-09T17:26:56.752685+0000","last_clean":"2026-03-09T17:26:56.752685+0000","last_became_active":"2026-03-09T17:26:28.101626+0000","last_became_peered":"2026-03-09T17:26:28.101626+0000","last_unstale":"2026-03-09T17:26:56.752685+0000","last_undegraded":"2026-03-09T17:26:56.752685+0000","last_fullsized":"2026-03-09T17:26:56.752685+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:08:33.383371+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756565+0000","last_change":"2026-03-09T17:26:32.122511+0000","last_active":"2026-03-09T17:26:56.756565+0000","last_peered":"2026-03-09T17:26:56.756565+0000","last_clean":"2026-03-09T17:26:56.756565+0000","last_became_active":"2026-03-09T17:26:32.122425+0000","last_became_peered":"2026-03-09T17:26:32.122425+0000","last_unstale":"2026-03-09T17:26:56.756565+0000","last_undegraded":"2026-03-09T17:26:56.756565+0000","last_fullsized":"2026-03-09T17:26:56.756565+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:38:20.380150+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755427+0000","last_change":"2026-03-09T17:26:34.129228+0000","last_active":"2026-03-09T17:26:56.755427+0000","last_peered":"2026-03-09T17:26:56.755427+0000","last_clean":"2026-03-09T17:26:56.755427+0000","last_became_active":"2026-03-09T17:26:34.129084+0000","last_became_peered":"2026-03-09T17:26:34.129084+0000","last_unstale":"2026-03-09T17:26:56.755427+0000","last_undegraded":"2026-03-09T17:26:56.755427+0000","last_fullsized":"2026-03-09T17:26:56.755427+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:58:17.819480+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752476+0000","last_change":"2026-03-09T17:26:30.123001+0000","last_active":"2026-03-09T17:26:56.752476+0000","last_peered":"2026-03-09T17:26:56.752476+0000","last_clean":"2026-03-09T17:26:56.752476+0000","last_became_active":"2026-03-09T17:26:30.122809+0000","last_became_peered":"2026-03-09T17:26:30.122809+0000","last_unstale":"2026-03-09T17:26:56.752476+0000","last_undegraded":"2026-03-09T17:26:56.752476+0000","last_fullsized":"2026-03-09T17:26:56.752476+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:36:59.018906+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756820+0000","last_change":"2026-03-09T17:26:28.097068+0000","last_active":"2026-03-09T17:26:56.756820+0000","last_peered":"2026-03-09T17:26:56.756820+0000","last_clean":"2026-03-09T17:26:56.756820+0000","last_became_active":"2026-03-09T17:26:28.096921+0000","last_became_peered":"2026-03-09T17:26:28.096921+0000","last_unstale":"2026-03-09T17:26:56.756820+0000","last_undegraded":"2026-03-09T17:26:56.756820+0000","last_fullsized":"2026-03-09T17:26:56.756820+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:43:49.091200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755292+0000","last_change":"2026-03-09T17:26:32.129214+0000","last_active":"2026-03-09T17:26:56.755292+0000","last_peered":"2026-03-09T17:26:56.755292+0000","last_clean":"2026-03-09T17:26:56.755292+0000","last_became_active":"2026-03-09T17:26:32.127418+0000","last_became_peered":"2026-03-09T17:26:32.127418+0000","last_unstale":"2026-03-09T17:26:56.755292+0000","last_undegraded":"2026-03-09T17:26:56.755292+0000","last_fullsized":"2026-03-09T17:26:56.755292+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:50:38.421678+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754043+0000","last_change":"2026-03-09T17:26:34.121319+0000","last_active":"2026-03-09T17:26:56.754043+0000","last_peered":"2026-03-09T17:26:56.754043+0000","last_clean":"2026-03-09T17:26:56.754043+0000","last_became_active":"2026-03-09T17:26:34.118993+0000","last_became_peered":"2026-03-09T17:26:34.118993+0000","last_unstale":"2026-03-09T17:26:56.754043+0000","last_undegraded":"2026-03-09T17:26:56.754043+0000","last_fullsized":"2026-03-09T17:26:56.754043+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:13:15.662089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"61'1","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754969+0000","last_change":"2026-03-09T17:26:34.135324+0000","last_active":"2026-03-09T17:26:56.754969+0000","last_peered":"2026-03-09T17:26:56.754969+0000","last_clean":"2026-03-09T17:26:56.754969+0000","last_became_active":"2026-03-09T17:26:34.135213+0000","last_became_peered":"2026-03-09T17:26:34.135213+0000","last_unstale":"2026-03-09T17:26:56.754969+0000","last_undegraded":"2026-03-09T17:26:56.754969+0000","last_fullsized":"2026-03-09T17:26:56.754969+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:56:25.608633+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749095+0000","last_change":"2026-03-09T17:26:30.111642+0000","last_active":"2026-03-09T17:26:56.749095+0000","last_peered":"2026-03-09T17:26:56.749095+0000","last_clean":"2026-03-09T17:26:56.749095+0000","last_became_active":"2026-03-09T17:26:30.111478+0000","last_became_peered":"2026-03-09T17:26:30.111478+0000","last_unstale":"2026-03-09T17:26:56.749095+0000","last_undegraded":"2026-03-09T17:26:56.749095+0000","last_fullsized":"2026-03-09T17:26:56.749095+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:56:12.936712+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752837+0000","last_change":"2026-03-09T17:26:28.102248+0000","last_active":"2026-03-09T17:26:56.752837+0000","last_peered":"2026-03-09T17:26:56.752837+0000","last_clean":"2026-03-09T17:26:56.752837+0000","last_became_active":"2026-03-09T17:26:28.101483+0000","last_became_peered":"2026-03-09T17:26:28.101483+0000","last_unstale":"2026-03-09T17:26:56.752837+0000","last_undegraded":"2026-03-09T17:26:56.752837+0000","last_fullsized":"2026-03-09T17:26:56.752837+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:38.243601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"61'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:33.188945+0000","last_change":"2026-03-09T17:26:32.122824+0000","last_active":"2026-03-09T17:27:33.188945+0000","last_peered":"2026-03-09T17:27:33.188945+0000","last_clean":"2026-03-09T17:27:33.188945+0000","last_became_active":"2026-03-09T17:26:32.122686+0000","last_became_peered":"2026-03-09T17:26:32.122686+0000","last_unstale":"2026-03-09T17:27:33.188945+0000","last_undegraded":"2026-03-09T17:27:33.188945+0000","last_fullsized":"2026-03-09T17:27:33.188945+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:13:26.800385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749193+0000","last_change":"2026-03-09T17:26:34.139361+0000","last_active":"2026-03-09T17:26:56.749193+0000","last_peered":"2026-03-09T17:26:56.749193+0000","last_clean":"2026-03-09T17:26:56.749193+0000","last_became_active":"2026-03-09T17:26:34.138661+0000","last_became_peered":"2026-03-09T17:26:34.138661+0000","last_unstale":"2026-03-09T17:26:56.749193+0000","last_undegraded":"2026-03-09T17:26:56.749193+0000","last_fullsized":"2026-03-09T17:26:56.749193+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:55:19.714654+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754245+0000","last_change":"2026-03-09T17:26:30.120429+0000","last_active":"2026-03-09T17:26:56.754245+0000","last_peered":"2026-03-09T17:26:56.754245+0000","last_clean":"2026-03-09T17:26:56.754245+0000","last_became_active":"2026-03-09T17:26:30.119815+0000","last_became_peered":"2026-03-09T17:26:30.119815+0000","last_unstale":"2026-03-09T17:26:56.754245+0000","last_undegraded":"2026-03-09T17:26:56.754245+0000","last_fullsized":"2026-03-09T17:26:56.754245+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:54:50.098737+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"54'1","reported_seq":34,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754244+0000","last_change":"2026-03-09T17:26:28.099713+0000","last_active":"2026-03-09T17:26:56.754244+0000","last_peered":"2026-03-09T17:26:56.754244+0000","last_clean":"2026-03-09T17:26:56.754244+0000","last_became_active":"2026-03-09T17:26:28.099482+0000","last_became_peered":"2026-03-09T17:26:28.099482+0000","last_unstale":"2026-03-09T17:26:56.754244+0000","last_undegraded":"2026-03-09T17:26:56.754244+0000","last_fullsized":"2026-03-09T17:26:56.754244+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:42:38.823670+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627736+0000","last_change":"2026-03-09T17:26:32.118586+0000","last_active":"2026-03-09T17:26:56.627736+0000","last_peered":"2026-03-09T17:26:56.627736+0000","last_clean":"2026-03-09T17:26:56.627736+0000","last_became_active":"2026-03-09T17:26:32.118387+0000","last_became_peered":"2026-03-09T17:26:32.118387+0000","last_unstale":"2026-03-09T17:26:56.627736+0000","last_undegraded":"2026-03-09T17:26:56.627736+0000","last_fullsized":"2026-03-09T17:26:56.627736+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:56:31.340982+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634915+0000","last_change":"2026-03-09T17:26:34.139676+0000","last_active":"2026-03-09T17:26:56.634915+0000","last_peered":"2026-03-09T17:26:56.634915+0000","last_clean":"2026-03-09T17:26:56.634915+0000","last_became_active":"2026-03-09T17:26:34.137914+0000","last_became_peered":"2026-03-09T17:26:34.137914+0000","last_unstale":"2026-03-09T17:26:56.634915+0000","last_undegraded":"2026-03-09T17:26:56.634915+0000","last_fullsized":"2026-03-09T17:26:56.634915+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:55:56.520253+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756727+0000","last_change":"2026-03-09T17:26:28.104009+0000","last_active":"2026-03-09T17:26:56.756727+0000","last_peered":"2026-03-09T17:26:56.756727+0000","last_clean":"2026-03-09T17:26:56.756727+0000","last_became_active":"2026-03-09T17:26:28.103897+0000","last_became_peered":"2026-03-09T17:26:28.103897+0000","last_unstale":"2026-03-09T17:26:56.756727+0000","last_undegraded":"2026-03-09T17:26:56.756727+0000","last_fullsized":"2026-03-09T17:26:56.756727+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:10:43.864112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"61'5","reported_seq":39,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629659+0000","last_change":"2026-03-09T17:26:30.116445+0000","last_active":"2026-03-09T17:26:56.629659+0000","last_peered":"2026-03-09T17:26:56.629659+0000","last_clean":"2026-03-09T17:26:56.629659+0000","last_became_active":"2026-03-09T17:26:30.115041+0000","last_became_peered":"2026-03-09T17:26:30.115041+0000","last_unstale":"2026-03-09T17:26:56.629659+0000","last_undegraded":"2026-03-09T17:26:56.629659+0000","last_fullsized":"2026-03-09T17:26:56.629659+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:17:59.624191+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749585+0000","last_change":"2026-03-09T17:26:32.129236+0000","last_active":"2026-03-09T17:26:56.749585+0000","last_peered":"2026-03-09T17:26:56.749585+0000","last_clean":"2026-03-09T17:26:56.749585+0000","last_became_active":"2026-03-09T17:26:32.128985+0000","last_became_peered":"2026-03-09T17:26:32.128985+0000","last_unstale":"2026-03-09T17:26:56.749585+0000","last_undegraded":"2026-03-09T17:26:56.749585+0000","last_fullsized":"2026-03-09T17:26:56.749585+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:43:40.111344+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754166+0000","last_change":"2026-03-09T17:26:34.138621+0000","last_active":"2026-03-09T17:26:56.754166+0000","last_peered":"2026-03-09T17:26:56.754166+0000","last_clean":"2026-03-09T17:26:56.754166+0000","last_became_active":"2026-03-09T17:26:34.137795+0000","last_became_peered":"2026-03-09T17:26:34.137795+0000","last_unstale":"2026-03-09T17:26:56.754166+0000","last_undegraded":"2026-03-09T17:26:56.754166+0000","last_fullsized":"2026-03-09T17:26:56.754166+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:56:14.679961+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754190+0000","last_change":"2026-03-09T17:26:28.099787+0000","last_active":"2026-03-09T17:26:56.754190+0000","last_peered":"2026-03-09T17:26:56.754190+0000","last_clean":"2026-03-09T17:26:56.754190+0000","last_became_active":"2026-03-09T17:26:28.099621+0000","last_became_peered":"2026-03-09T17:26:28.099621+0000","last_unstale":"2026-03-09T17:26:56.754190+0000","last_undegraded":"2026-03-09T17:26:56.754190+0000","last_fullsized":"2026-03-09T17:26:56.754190+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:58:09.894637+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635360+0000","last_change":"2026-03-09T17:26:30.395867+0000","last_active":"2026-03-09T17:26:56.635360+0000","last_peered":"2026-03-09T17:26:56.635360+0000","last_clean":"2026-03-09T17:26:56.635360+0000","last_became_active":"2026-03-09T17:26:30.394776+0000","last_became_peered":"2026-03-09T17:26:30.394776+0000","last_unstale":"2026-03-09T17:26:56.635360+0000","last_undegraded":"2026-03-09T17:26:56.635360+0000","last_fullsized":"2026-03-09T17:26:56.635360+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:34:48.765262+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635698+0000","last_change":"2026-03-09T17:26:32.125547+0000","last_active":"2026-03-09T17:26:56.635698+0000","last_peered":"2026-03-09T17:26:56.635698+0000","last_clean":"2026-03-09T17:26:56.635698+0000","last_became_active":"2026-03-09T17:26:32.125284+0000","last_became_peered":"2026-03-09T17:26:32.125284+0000","last_unstale":"2026-03-09T17:26:56.635698+0000","last_undegraded":"2026-03-09T17:26:56.635698+0000","last_fullsized":"2026-03-09T17:26:56.635698+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:10:17.682655+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":50,"seq":214748364821,"num_pgs":59,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1146880,"data_stored":710774,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561052,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27912,"kb_used_data":1080,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939512,"statfs":{"total":21470642176,"available":21442060288,"internally_reserved":0,"allocated":1105920,"data_stored":707990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":38,"seq":163208757282,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":250541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":32,"seq":137438953513,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27512,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939912,"statfs":{"total":21470642176,"available":21442469888,"internally_reserved":0,"allocated":688128,"data_stored":249668,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149744,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":250539,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411383,"num_pgs":39,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27472,"kb_used_data":632,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939952,"statfs":{"total":21470642176,"available":21442510848,"internally_reserved":0,"allocated":647168,"data_stored":249203,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574910,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":644,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":659456,"data_stored":249041,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738436,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27932,"kb_used_data":1096,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939492,"statfs":{"total":21470642176,"available":21442039808,"internally_reserved":0,"allocated":1122304,"data_stored":708645,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1429,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"inter
nally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"i
nternally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T17:27:45.520 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph pg dump --format=json 2026-03-09T17:27:45.774 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:45 vm00 bash[20770]: cluster 2026-03-09T17:27:44.659797+0000 mgr.y (mgr.14505) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:45.774 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:45 vm00 bash[20770]: cluster 2026-03-09T17:27:44.659797+0000 mgr.y (mgr.14505) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:46.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:45 vm00 bash[28333]: cluster 2026-03-09T17:27:44.659797+0000 mgr.y (mgr.14505) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:46.037 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:45 vm00 bash[28333]: cluster 2026-03-09T17:27:44.659797+0000 mgr.y (mgr.14505) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:45 vm02 bash[23351]: cluster 2026-03-09T17:27:44.659797+0000 mgr.y (mgr.14505) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:45 vm02 bash[23351]: cluster 2026-03-09T17:27:44.659797+0000 mgr.y (mgr.14505) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:46.537 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:27:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:27:47.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:46 vm00 bash[20770]: audit 2026-03-09T17:27:45.456765+0000 mgr.y (mgr.14505) 64 : audit [DBG] from='client.14649 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:47.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:46 vm00 bash[20770]: audit 2026-03-09T17:27:45.456765+0000 mgr.y (mgr.14505) 64 : audit [DBG] from='client.14649 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:47.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:46 vm00 bash[28333]: audit 2026-03-09T17:27:45.456765+0000 mgr.y (mgr.14505) 64 : audit [DBG] from='client.14649 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:47.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:46 vm00 bash[28333]: audit 2026-03-09T17:27:45.456765+0000 mgr.y (mgr.14505) 64 : audit [DBG] from='client.14649 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:47.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:46 vm02 bash[23351]: audit 2026-03-09T17:27:45.456765+0000 mgr.y (mgr.14505) 64 : audit [DBG] from='client.14649 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:47.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:46 vm02 bash[23351]: audit 2026-03-09T17:27:45.456765+0000 mgr.y (mgr.14505) 64 : audit [DBG] from='client.14649 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:48.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:47 vm02 bash[23351]: cluster 2026-03-09T17:27:46.660053+0000 mgr.y (mgr.14505) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:27:48.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:47 vm02 bash[23351]: cluster 2026-03-09T17:27:46.660053+0000 mgr.y (mgr.14505) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:27:48.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:47 vm00 bash[20770]: cluster 2026-03-09T17:27:46.660053+0000 mgr.y (mgr.14505) 65 : cluster 
[DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:27:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:47 vm00 bash[20770]: cluster 2026-03-09T17:27:46.660053+0000 mgr.y (mgr.14505) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:27:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:47 vm00 bash[28333]: cluster 2026-03-09T17:27:46.660053+0000 mgr.y (mgr.14505) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:27:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:47 vm00 bash[28333]: cluster 2026-03-09T17:27:46.660053+0000 mgr.y (mgr.14505) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:27:49.240 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1233428140 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034077f40 msgr2=0x7f0034113640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- 192.168.123.100:0/1233428140 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034077f40 0x7f0034113640 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f0024009a30 tx=0x7f002402f220 comp rx=0 tx=0).stop 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1233428140 shutdown_connections 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- 192.168.123.100:0/1233428140 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034113b80 0x7f0034115f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- 192.168.123.100:0/1233428140 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034077f40 0x7f0034113640 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- 192.168.123.100:0/1233428140 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077620 0x7f0034077a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1233428140 >> 192.168.123.100:0/1233428140 conn(0x7f00341009e0 msgr2=0x7f0034102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:49.396 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1233428140 shutdown_connections 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1233428140 wait complete. 
2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 Processor -- start 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- start start 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034077620 0x7f00341a10b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077f40 0x7f00341a15f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 0x7f00341a5980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0034118ab0 con 0x7f0034077f40 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f0034118930 con 0x7f0034077620 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0034118c30 con 0x7f0034113b80 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00388e4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077f40 0x7f00341a15f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 0x7f00341a5980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:49.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 0x7f00341a5980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:33952/0 (socket says 192.168.123.100:33952) 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 -- 192.168.123.100:0/1278802079 learned_addr learned my addr 192.168.123.100:0/1278802079 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00390e5640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034077620 0x7f00341a10b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:49.398 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 -- 192.168.123.100:0/1278802079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034077620 msgr2=0x7f00341a10b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034077620 0x7f00341a10b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 -- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077f40 msgr2=0x7f00341a15f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077f40 0x7f00341a15f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 -- 192.168.123.100:0/1278802079 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f00341a6060 con 0x7f0034113b80 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00388e4640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077f40 0x7f00341a15f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00398e6640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 0x7f00341a5980 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f003000b9e0 tx=0x7f003000beb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00390e5640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034077620 0x7f00341a10b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:27:49.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f003000c730 con 0x7f0034113b80 2026-03-09T17:27:49.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f00341a6350 con 0x7f0034113b80 2026-03-09T17:27:49.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f00341adc30 con 0x7f0034113b80 2026-03-09T17:27:49.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0030010070 con 0x7f0034113b80 2026-03-09T17:27:49.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.395+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f00300047f0 con 0x7f0034113b80 2026-03-09T17:27:49.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.399+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0004005180 con 0x7f0034113b80 2026-03-09T17:27:49.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.399+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f003000c8d0 con 0x7f0034113b80 2026-03-09T17:27:49.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.403+0000 7f00227fc640 1 --2- 192.168.123.100:0/1278802079 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0000077710 0x7f0000079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:49.404 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.403+0000 7f00390e5640 1 --2- 192.168.123.100:0/1278802079 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0000077710 0x7f0000079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:49.404 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.403+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f0030098970 con 0x7f0034113b80 2026-03-09T17:27:49.404 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.403+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f003009b030 con 0x7f0034113b80 2026-03-09T17:27:49.404 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.403+0000 7f00390e5640 1 --2- 192.168.123.100:0/1278802079 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0000077710 0x7f0000079bd0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0028008b70 tx=0x7f0028005e90 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:49.500 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.499+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 --> v2:192.168.123.100:6800/2673235927 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f0004002bf0 con 0x7f0000077710 2026-03-09T17:27:49.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.503+0000 7f00227fc640 1 -- 192.168.123.100:0/1278802079 <== mgr.14505 v2:192.168.123.100:6800/2673235927 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346481 (secure 0 0 0) 0x7f0004002bf0 con 0x7f0000077710 2026-03-09T17:27:49.505 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:27:49.507 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0000077710 msgr2=0x7f0000079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 --2- 192.168.123.100:0/1278802079 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0000077710 0x7f0000079bd0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0028008b70 tx=0x7f0028005e90 comp rx=0 tx=0).stop 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 msgr2=0x7f00341a5980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 0x7f00341a5980 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f003000b9e0 tx=0x7f003000beb0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 shutdown_connections 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 --2- 192.168.123.100:0/1278802079 >> v2:192.168.123.100:6800/2673235927 conn(0x7f0000077710 0x7f0000079bd0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0034113b80 0x7f00341a5980 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0034077f40 0x7f00341a15f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 --2- 192.168.123.100:0/1278802079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0034077620 0x7f00341a10b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:49.510 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 >> 
192.168.123.100:0/1278802079 conn(0x7f00341009e0 msgr2=0x7f0034102dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:49.510 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 shutdown_connections 2026-03-09T17:27:49.510 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:49.507+0000 7f003b370640 1 -- 192.168.123.100:0/1278802079 wait complete. 2026-03-09T17:27:49.563 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":29,"stamp":"2026-03-09T17:27:48.660210+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":916,"num_read_kb":775,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221224,"kb_used_data":6524,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518168,"statfs":{"total":171765137408,"available":171538604032,"internally_reserved":0,"allocated":6680576,"data_stored":3376401,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_rese
rved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002104"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754278+0000","last_change":"2026-03-09T17:26:34.121816+0000","last_active":"2026-03-09T17:26:56.754278+0000","last_peered":"2026-03-09T17:26:56.754278+0000","last_clean":"2026-03-09T17:26:56.754278+0000","last_became_active":"2026-03-09T17:26:34.121601+0000","last_became_peered":"2026-03-09T17:26:34.121601+0000","last_unstale":"2026-03-09T17:26:56.754278+0000","last_undegraded":"2026-03-09T17:26:56.754278+0000","last_fullsized":"2026-03-09T17:26:56.754278+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:36:25.863362+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627446+0000","last_change":"2026-03-09T17:26:28.100627+0000","last_active":"2026-03-09T17:26:56.627446+0000","last_peered":"2026-03-09T17:26:56.627446+0000","last_clean":"2026-03-09T17:26:56.627446+0000","last_became_active":"2026-03-09T17:26:28.100425+0000","last_became_peered":"2026-03-09T17:26:28.100425+0000","last_unstale":"2026-03-09T17:26:56.627446+0000","last_undegraded":"2026-03-09T17:26:56.627446+0000","last_fullsized":"2026-03-09T17:26:56.627446+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17
:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:03:00.063577+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754280+0000","last_change":"2026-03-09T17:26:30.395835+0000","last_active":"2026-03-09T17:26:56.754280+0000","last_peered":"2026-03-09T17:26:56.754280+0000","last_clean":"2026-03-09T17:26:56.754280+0000","last_became_active":"2026-03-09T17:26:30.395688+0000","last_became_peered":"2026-03-09T17:26:30.395688+0000","last_unstale":"2026-03-09T17:26:56.754280+0000","last_undegraded":"2026-03-09T17:26:56.754280+0000","last_fullsized":"2026-03-09T17:26:56.754280+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:41:19.007158+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634715+0000","last_change":"2026-03-09T17:26:32.121819+0000","last_active":"2026-03-09T17:26:56.634715+0000","last_peered":"2026-03-09T17:26:56.634715+0000","last_clean":"2026-03-09T17:26:56.634715+0000","last_became_active":"2026-03-09T17:26:32.121693+0000","last_became_peered":"2026-03-09T17:26:32.121693+0000","last_unstale":"2026-03-09T17:26:56.634715+0000","last_undegraded":"2026-03-09T17:26:56.634715+0000","last_fullsized":"2026-03-09T17:26:56.634715+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:47:17.885391+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754096+0000","last_change":"2026-03-09T17:26:28.089505+0000","last_active":"2026-03-09T17:26:56.754096+0000","last_peered":"2026-03-09T17:26:56.754096+0000","last_clean":"2026-03-09T17:26:56.754096+0000","last_became_active":"2026-03-09T17:26:28.089308+0000","last_became_peered":"2026-03-09T17:26:28.089308+0000","last_unstale":"2026-03-09T17:26:56.754096+0000","last_undegraded":"2026-03-09T17:26:56.754096+0000","last_fullsized":"2026-03-09T17:26:56.754096+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:45:23.762126+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627863+0000","last_change":"2026-03-09T17:26:30.394005+0000","last_active":"2026-03-09T17:26:56.627863+0000","last_peered":"2026-03-09T17:26:56.627863+0000","last_clean":"2026-03-09T17:26:56.627863+0000","last_became_active":"2026-03-09T17:26:30.393798+0000","last_became_peered":"2026-03-09T17:26:30.393798+0000","last_unstale":"2026-03-09T17:26:56.627863+0000","last_undegraded":"2026-03-09T17:26:56.627863+0000","last_fullsized":"2026-03-09T17:26:56.627863+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:32:40.048006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749538+0000","last_change":"2026-03-09T17:26:32.124103+0000","last_active":"2026-03-09T17:26:56.749538+0000","last_peered":"2026-03-09T17:26:56.749538+0000","last_clean":"2026-03-09T17:26:56.749538+0000","last_became_active":"2026-03-09T17:26:32.123957+0000","last_became_peered":"2026-03-09T17:26:32.123957+0000","last_unstale":"2026-03-09T17:26:56.749538+0000","last_undegraded":"2026-03-09T17:26:56.749538+0000","last_fullsized":"2026-03-09T17:26:56.749538+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:41:55.577257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634338+0000","last_change":"2026-03-09T17:26:34.138187+0000","last_active":"2026-03-09T17:26:56.634338+0000","last_peered":"2026-03-09T17:26:56.634338+0000","last_clean":"2026-03-09T17:26:56.634338+0000","last_became_active":"2026-03-09T17:26:34.137670+0000","last_became_peered":"2026-03-09T17:26:34.137670+0000","last_unstale":"2026-03-09T17:26:56.634338+0000","last_undegraded":"2026-03-09T17:26:56.634338+0000","last_fullsized":"2026-03-09T17:26:56.634338+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:48:46.852188+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":15,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.626845+0000","last_change":"2026-03-09T17:26:43.419132+0000","last_active":"2026-03-09T17:26:56.626845+0000","last_peered":"2026-03-09T17:26:56.626845+0000","last_clean":"2026-03-09T17:26:56.626845+0000","last_became_active":"2026-03-09T17:26:43.418755+0000","last_became_peered":"2026-03-09T17:26:43.418755+0000","last_unstale":"2026-03-09T17:26:56.626845+0000","last_undegraded":"2026-03-09T17:26:56.626845+0000","last_fullsized":"2026-03-09T17:26:56.626845+0000","mapping_epoch":62,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":63,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:03:17.341663+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,6,0],"acting":[2,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.1c","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753312+0000","last_change":"2026-03-09T17:26:30.123075+0000","last_active":"2026-03-09T17:26:56.753312+0000","last_peered":"2026-03-09T17:26:56.753312+0000","last_clean":"2026-03-09T17:26:56.753312+0000","last_became_active":"2026-03-09T17:26:30.122923+0000","last_became_peered":"2026-03-09T17:26:30.122923+0000","last_unstale":"2026-03-09T17:26:56.753312+0000","last_undegraded":"2026-03-09T17:26:56.753312+0000","last_fullsized":"2026-03-09T17:26:56.753312+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:27:40.685922+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755104+0000","last_change":"2026-03-09T17:26:32.126733+0000","last_active":"2026-03-09T17:26:56.755104+0000","last_peered":"2026-03-09T17:26:56.755104+0000","last_clean":"2026-03-09T17:26:56.755104+0000","last_became_active":"2026-03-09T17:26:32.126622+0000","last_became_peered":"2026-03-09T17:26:32.126622+0000","last_unstale":"2026-03-09T17:26:56.755104+0000","last_undegraded":"2026-03-09T17:26:56.755104+0000","last_fullsized":"2026-03-09T17:26:56.755104+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:01:31.280677+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753283+0000","last_change":"2026-03-09T17:26:34.135211+0000","last_active":"2026-03-09T17:26:56.753283+0000","last_peered":"2026-03-09T17:26:56.753283+0000","last_clean":"2026-03-09T17:26:56.753283+0000","last_became_active":"2026-03-09T17:26:34.135094+0000","last_became_peered":"2026-03-09T17:26:34.135094+0000","last_unstale":"2026-03-09T17:26:56.753283+0000","last_undegraded":"2026-03-09T17:26:56.753283+0000","last_fullsized":"2026-03-09T17:26:56.753283+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:23:18.847909+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754975+0000","last_change":"2026-03-09T17:26:28.102985+0000","last_active":"2026-03-09T17:26:56.754975+0000","last_peered":"2026-03-09T17:26:56.754975+0000","last_clean":"2026-03-09T17:26:56.754975+0000","last_became_active":"2026-03-09T17:26:28.102915+0000","last_became_peered":"2026-03-09T17:26:28.102915+0000","last_unstale":"2026-03-09T17:26:56.754975+0000","last_undegraded":"2026-03-09T17:26:56.754975+0000","last_fullsized":"2026-03-09T17:26:56.754975+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:16:16.389834+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"61'12","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752250+0000","last_change":"2026-03-09T17:26:30.121047+0000","last_active":"2026-03-09T17:26:56.752250+0000","last_peered":"2026-03-09T17:26:56.752250+0000","last_clean":"2026-03-09T17:26:56.752250+0000","last_became_active":"2026-03-09T17:26:30.120947+0000","last_became_peered":"2026-03-09T17:26:30.120947+0000","last_unstale":"2026-03-09T17:26:56.752250+0000","last_undegraded":"2026-03-09T17:26:56.752250+0000","last_fullsized":"2026-03-09T17:26:56.752250+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:13:14.791683+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752198+0000","last_change":"2026-03-09T17:26:32.120033+0000","last_active":"2026-03-09T17:26:56.752198+0000","last_peered":"2026-03-09T17:26:56.752198+0000","last_clean":"2026-03-09T17:26:56.752198+0000","last_became_active":"2026-03-09T17:26:32.119868+0000","last_became_peered":"2026-03-09T17:26:32.119868+0000","last_unstale":"2026-03-09T17:26:56.752198+0000","last_undegraded":"2026-03-09T17:26:56.752198+0000","last_fullsized":"2026-03-09T17:26:56.752198+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:04:54.592179+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627168+0000","last_change":"2026-03-09T17:26:34.121326+0000","last_active":"2026-03-09T17:26:56.627168+0000","last_peered":"2026-03-09T17:26:56.627168+0000","last_clean":"2026-03-09T17:26:56.627168+0000","last_became_active":"2026-03-09T17:26:34.121259+0000","last_became_peered":"2026-03-09T17:26:34.121259+0000","last_unstale":"2026-03-09T17:26:56.627168+0000","last_undegraded":"2026-03-09T17:26:56.627168+0000","last_fullsized":"2026-03-09T17:26:56.627168+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:32:35.534173+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"61'19","reported_seq":60,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.757293+0000","last_change":"2026-03-09T17:26:30.122084+0000","last_active":"2026-03-09T17:26:56.757293+0000","last_peered":"2026-03-09T17:26:56.757293+0000","last_clean":"2026-03-09T17:26:56.757293+0000","last_became_active":"2026-03-09T17:26:30.121995+0000","last_became_peered":"2026-03-09T17:26:30.121995+0000","last_unstale":"2026-03-09T17:26:56.757293+0000","last_undegraded":"2026-03-09T17:26:56.757293+0000","last_fullsized":"2026-03-09T17:26:56.757293+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:59:24.400840+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754883+0000","last_change":"2026-03-09T17:26:28.110689+0000","last_active":"2026-03-09T17:26:56.754883+0000","last_peered":"2026-03-09T17:26:56.754883+0000","last_clean":"2026-03-09T17:26:56.754883+0000","last_became_active":"2026-03-09T17:26:28.110571+0000","last_became_peered":"2026-03-09T17:26:28.110571+0000","last_unstale":"2026-03-09T17:26:56.754883+0000","last_undegraded":"2026-03-09T17:26:56.754883+0000","last_fullsized":"2026-03-09T17:26:56.754883+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:03:24.475506+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749446+0000","last_change":"2026-03-09T17:26:32.129148+0000","last_active":"2026-03-09T17:26:56.749446+0000","last_peered":"2026-03-09T17:26:56.749446+0000","last_clean":"2026-03-09T17:26:56.749446+0000","last_became_active":"2026-03-09T17:26:32.128978+0000","last_became_peered":"2026-03-09T17:26:32.128978+0000","last_unstale":"2026-03-09T17:26:56.749446+0000","last_undegraded":"2026-03-09T17:26:56.749446+0000","last_fullsized":"2026-03-09T17:26:56.749446+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:45:35.264837+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627745+0000","last_change":"2026-03-09T17:26:34.123007+0000","last_active":"2026-03-09T17:26:56.627745+0000","last_peered":"2026-03-09T17:26:56.627745+0000","last_clean":"2026-03-09T17:26:56.627745+0000","last_became_active":"2026-03-09T17:26:34.122830+0000","last_became_peered":"2026-03-09T17:26:34.122830+0000","last_unstale":"2026-03-09T17:26:56.627745+0000","last_undegraded":"2026-03-09T17:26:56.627745+0000","last_fullsized":"2026-03-09T17:26:56.627745+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:54:41.840250+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753940+0000","last_change":"2026-03-09T17:26:30.122030+0000","last_active":"2026-03-09T17:26:56.753940+0000","last_peered":"2026-03-09T17:26:56.753940+0000","last_clean":"2026-03-09T17:26:56.753940+0000","last_became_active":"2026-03-09T17:26:30.121102+0000","last_became_peered":"2026-03-09T17:26:30.121102+0000","last_unstale":"2026-03-09T17:26:56.753940+0000","last_undegraded":"2026-03-09T17:26:56.753940+0000","last_fullsized":"2026-03-09T17:26:56.753940+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:11:00.033702+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748877+0000","last_change":"2026-03-09T17:26:28.096234+0000","last_active":"2026-03-09T17:26:56.748877+0000","last_peered":"2026-03-09T17:26:56.748877+0000","last_clean":"2026-03-09T17:26:56.748877+0000","last_became_active":"2026-03-09T17:26:28.096106+0000","last_became_peered":"2026-03-09T17:26:28.096106+0000","last_unstale":"2026-03-09T17:26:56.748877+0000","last_undegraded":"2026-03-09T17:26:56.748877+0000","last_fullsized":"2026-03-09T17:26:56.748877+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:45.854747+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"61'11","reported_seq":51,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189301+0000","last_change":"2026-03-09T17:26:32.119046+0000","last_active":"2026-03-09T17:27:38.189301+0000","last_peered":"2026-03-09T17:27:38.189301+0000","last_clean":"2026-03-09T17:27:38.189301+0000","last_became_active":"2026-03-09T17:26:32.116392+0000","last_became_peered":"2026-03-09T17:26:32.116392+0000","last_unstale":"2026-03-09T17:27:38.189301+0000","last_undegraded":"2026-03-09T17:27:38.189301+0000","last_fullsized":"2026-03-09T17:27:38.189301+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:34:40.418353+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635522+0000","last_change":"2026-03-09T17:26:34.126734+0000","last_active":"2026-03-09T17:26:56.635522+0000","last_peered":"2026-03-09T17:26:56.635522+0000","last_clean":"2026-03-09T17:26:56.635522+0000","last_became_active":"2026-03-09T17:26:34.126366+0000","last_became_peered":"2026-03-09T17:26:34.126366+0000","last_unstale":"2026-03-09T17:26:56.635522+0000","last_undegraded":"2026-03-09T17:26:56.635522+0000","last_fullsized":"2026-03-09T17:26:56.635522+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:08:27.868690+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754005+0000","last_change":"2026-03-09T17:26:30.114507+0000","last_active":"2026-03-09T17:26:56.754005+0000","last_peered":"2026-03-09T17:26:56.754005+0000","last_clean":"2026-03-09T17:26:56.754005+0000","last_became_active":"2026-03-09T17:26:30.114417+0000","last_became_peered":"2026-03-09T17:26:30.114417+0000","last_unstale":"2026-03-09T17:26:56.754005+0000","last_undegraded":"2026-03-09T17:26:56.754005+0000","last_fullsized":"2026-03-09T17:26:56.754005+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:45:45.268689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748919+0000","last_change":"2026-03-09T17:26:28.096195+0000","last_active":"2026-03-09T17:26:56.748919+0000","last_peered":"2026-03-09T17:26:56.748919+0000","last_clean":"2026-03-09T17:26:56.748919+0000","last_became_active":"2026-03-09T17:26:28.096106+0000","last_became_peered":"2026-03-09T17:26:28.096106+0000","last_unstale":"2026-03-09T17:26:56.748919+0000","last_undegraded":"2026-03-09T17:26:56.748919+0000","last_fullsized":"2026-03-09T17:26:56.748919+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:33:12.974014+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"61'11","reported_seq":51,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189219+0000","last_change":"2026-03-09T17:26:32.126691+0000","last_active":"2026-03-09T17:27:38.189219+0000","last_peered":"2026-03-09T17:27:38.189219+0000","last_clean":"2026-03-09T17:27:38.189219+0000","last_became_active":"2026-03-09T17:26:32.126603+0000","last_became_peered":"2026-03-09T17:26:32.126603+0000","last_unstale":"2026-03-09T17:27:38.189219+0000","last_undegraded":"2026-03-09T17:27:38.189219+0000","last_fullsized":"2026-03-09T17:27:38.189219+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:35:31.811392+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752623+0000","last_change":"2026-03-09T17:26:34.140679+0000","last_active":"2026-03-09T17:26:56.752623+0000","last_peered":"2026-03-09T17:26:56.752623+0000","last_clean":"2026-03-09T17:26:56.752623+0000","last_became_active":"2026-03-09T17:26:34.138814+0000","last_became_peered":"2026-03-09T17:26:34.138814+0000","last_unstale":"2026-03-09T17:26:56.752623+0000","last_undegraded":"2026-03-09T17:26:56.752623+0000","last_fullsized":"2026-03-09T17:26:56.752623+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:23:16.209889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"61'12","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634581+0000","last_change":"2026-03-09T17:26:30.395904+0000","last_active":"2026-03-09T17:26:56.634581+0000","last_peered":"2026-03-09T17:26:56.634581+0000","last_clean":"2026-03-09T17:26:56.634581+0000","last_became_active":"2026-03-09T17:26:30.394804+0000","last_became_peered":"2026-03-09T17:26:30.394804+0000","last_unstale":"2026-03-09T17:26:56.634581+0000","last_undegraded":"2026-03-09T17:26:56.634581+0000","last_fullsized":"2026-03-09T17:26:56.634581+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:35:31.607941+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754869+0000","last_change":"2026-03-09T17:26:28.097744+0000","last_active":"2026-03-09T17:26:56.754869+0000","last_peered":"2026-03-09T17:26:56.754869+0000","last_clean":"2026-03-09T17:26:56.754869+0000","last_became_active":"2026-03-09T17:26:28.097670+0000","last_became_peered":"2026-03-09T17:26:28.097670+0000","last_unstale":"2026-03-09T17:26:56.754869+0000","last_undegraded":"2026-03-09T17:26:56.754869+0000","last_fullsized":"2026-03-09T17:26:56.754869+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:13:21.154249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753454+0000","last_change":"2026-03-09T17:26:32.121492+0000","last_active":"2026-03-09T17:26:56.753454+0000","last_peered":"2026-03-09T17:26:56.753454+0000","last_clean":"2026-03-09T17:26:56.753454+0000","last_became_active":"2026-03-09T17:26:32.121400+0000","last_became_peered":"2026-03-09T17:26:32.121400+0000","last_unstale":"2026-03-09T17:26:56.753454+0000","last_undegraded":"2026-03-09T17:26:56.753454+0000","last_fullsized":"2026-03-09T17:26:56.753454+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:31:21.190162+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753530+0000","last_change":"2026-03-09T17:26:34.135457+0000","last_active":"2026-03-09T17:26:56.753530+0000","last_peered":"2026-03-09T17:26:56.753530+0000","last_clean":"2026-03-09T17:26:56.753530+0000","last_became_active":"2026-03-09T17:26:34.135379+0000","last_became_peered":"2026-03-09T17:26:34.135379+0000","last_unstale":"2026-03-09T17:26:56.753530+0000","last_undegraded":"2026-03-09T17:26:56.753530+0000","last_fullsized":"2026-03-09T17:26:56.753530+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:22:23.849646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"61'12","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629513+0000","last_change":"2026-03-09T17:26:30.097305+0000","last_active":"2026-03-09T17:26:56.629513+0000","last_peered":"2026-03-09T17:26:56.629513+0000","last_clean":"2026-03-09T17:26:56.629513+0000","last_became_active":"2026-03-09T17:26:30.097200+0000","last_became_peered":"2026-03-09T17:26:30.097200+0000","last_unstale":"2026-03-09T17:26:56.629513+0000","last_undegraded":"2026-03-09T17:26:56.629513+0000","last_fullsized":"2026-03-09T17:26:56.629513+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:54:21.042441+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756964+0000","last_change":"2026-03-09T17:26:28.097003+0000","last_active":"2026-03-09T17:26:56.756964+0000","last_peered":"2026-03-09T17:26:56.756964+0000","last_clean":"2026-03-09T17:26:56.756964+0000","last_became_active":"2026-03-09T17:26:28.096797+0000","last_became_peered":"2026-03-09T17:26:28.096797+0000","last_unstale":"2026-03-09T17:26:56.756964+0000","last_undegraded":"2026-03-09T17:26:56.756964+0000","last_fullsized":"2026-03-09T17:26:56.756964+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:46:24.631257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"59'1","reported_seq":37,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635422+0000","last_change":"2026-03-09T17:26:35.110853+0000","last_active":"2026-03-09T17:26:56.635422+0000","last_peered":"2026-03-09T17:26:56.635422+0000","last_clean":"2026-03-09T17:26:56.635422+0000","last_became_active":"2026-03-09T17:26:30.119547+0000","last_became_peered":"2026-03-09T17:26:30.119547+0000","last_unstale":"2026-03-09T17:26:56.635422+0000","last_undegraded":"2026-03-09T17:26:56.635422+0000","last_fullsized":"2026-03-09T17:26:56.635422+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:21:29.186205+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00025376499999999998,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"61'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189272+0000","last_change":"2026-03-09T17:26:32.120838+0000","last_active":"2026-03-09T17:27:38.189272+0000","last_peered":"2026-03-09T17:27:38.189272+0000","last_clean":"2026-03-09T17:27:38.189272+0000","last_became_active":"2026-03-09T17:26:32.116128+0000","last_became_peered":"2026-03-09T17:26:32.116128+0000","last_unstale":"2026-03-09T17:27:38.189272+0000","last_undegraded":"2026-03-09T17:27:38.189272+0000","last_fullsized":"2026-03-09T17:27:38.189272+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:54:10.852902+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754752+0000","last_change":"2026-03-09T17:26:34.122898+0000","last_active":"2026-03-09T17:26:56.754752+0000","last_peered":"2026-03-09T17:26:56.754752+0000","last_clean":"2026-03-09T17:26:56.754752+0000","last_became_active":"2026-03-09T17:26:34.122815+0000","last_became_peered":"2026-03-09T17:26:34.122815+0000","last_unstale":"2026-03-09T17:26:56.754752+0000","last_undegraded":"2026-03-09T17:26:56.754752+0000","last_fullsized":"2026-03-09T17:26:56.754752+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:26:51.974918+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"61'13","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753753+0000","last_change":"2026-03-09T17:26:30.114718+0000","last_active":"2026-03-09T17:26:56.753753+0000","last_peered":"2026-03-09T17:26:56.753753+0000","last_clean":"2026-03-09T17:26:56.753753+0000","last_became_active":"2026-03-09T17:26:30.114362+0000","last_became_peered":"2026-03-09T17:26:30.114362+0000","last_unstale":"2026-03-09T17:26:56.753753+0000","last_undegraded":"2026-03-09T17:26:56.753753+0000","last_fullsized":"2026-03-09T17:26:56.753753+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:24:10.950068+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"54'1","reported_seq":34,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748963+0000","last_change":"2026-03-09T17:26:28.108575+0000","last_active":"2026-03-09T17:26:56.748963+0000","last_peered":"2026-03-09T17:26:56.748963+0000","last_clean":"2026-03-09T17:26:56.748963+0000","last_became_active":"2026-03-09T17:26:28.108288+0000","last_became_peered":"2026-03-09T17:26:28.108288+0000","last_unstale":"2026-03-09T17:26:56.748963+0000","last_undegraded":"2026-03-09T17:26:56.748963+0000","last_fullsized":"2026-03-09T17:26:56.748963+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:21:11.470021+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"61'5","reported_seq":111,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:46.507075+0000","last_change":"2026-03-09T17:26:35.111503+0000","last_active":"2026-03-09T17:27:46.507075+0000","last_peered":"2026-03-09T17:27:46.507075+0000","last_clean":"2026-03-09T17:27:46.507075+0000","last_became_active":"2026-03-09T17:26:30.119247+0000","last_became_peered":"2026-03-09T17:26:30.119247+0000","last_unstale":"2026-03-09T17:27:46.507075+0000","last_undegraded":"2026-03-09T17:27:46.507075+0000","last_fullsized":"2026-03-09T17:27:46.507075+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:07:34.738017+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000233787,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":72,"num_read_kb":67,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635179+0000","last_change":"2026-03-09T17:26:32.125970+0000","last_active":"2026-03-09T17:26:56.635179+0000","last_peered":"2026-03-09T17:26:56.635179+0000","last_clean":"2026-03-09T17:26:56.635179+0000","last_became_active":"2026-03-09T17:26:32.125422+0000","last_became_peered":"2026-03-09T17:26:32.125422+0000","last_unstale":"2026-03-09T17:26:56.635179+0000","last_undegraded":"2026-03-09T17:26:56.635179+0000","last_fullsized":"2026-03-09T17:26:56.635179+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:10:27.368722+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635121+0000","last_change":"2026-03-09T17:26:34.126909+0000","last_active":"2026-03-09T17:26:56.635121+0000","last_peered":"2026-03-09T17:26:56.635121+0000","last_clean":"2026-03-09T17:26:56.635121+0000","last_became_active":"2026-03-09T17:26:34.125303+0000","last_became_peered":"2026-03-09T17:26:34.125303+0000","last_unstale":"2026-03-09T17:26:56.635121+0000","last_undegraded":"2026-03-09T17:26:56.635121+0000","last_fullsized":"2026-03-09T17:26:56.635121+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:33:27.610543+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"61'30","reported_seq":95,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189842+0000","last_change":"2026-03-09T17:26:30.398325+0000","last_active":"2026-03-09T17:27:38.189842+0000","last_peered":"2026-03-09T17:27:38.189842+0000","last_clean":"2026-03-09T17:27:38.189842+0000","last_became_active":"2026-03-09T17:26:30.398128+0000","last_became_peered":"2026-03-09T17:26:30.398128+0000","last_unstale":"2026-03-09T17:27:38.189842+0000","last_undegraded":"2026-03-09T17:27:38.189842+0000","last_fullsized":"2026-03-09T17:27:38.189842+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:31:02.151892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754890+0000","last_change":"2026-03-09T17:26:28.108063+0000","last_active":"2026-03-09T17:26:56.754890+0000","last_peered":"2026-03-09T17:26:56.754890+0000","last_clean":"2026-03-09T17:26:56.754890+0000","last_became_active":"2026-03-09T17:26:28.107959+0000","last_became_peered":"2026-03-09T17:26:28.107959+0000","last_unstale":"2026-03-09T17:26:56.754890+0000","last_undegraded":"2026-03-09T17:26:56.754890+0000","last_fullsized":"2026-03-09T17:26:56.754890+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:23:27.430232+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.757349+0000","last_change":"2026-03-09T17:26:32.119154+0000","last_active":"2026-03-09T17:26:56.757349+0000","last_peered":"2026-03-09T17:26:56.757349+0000","last_clean":"2026-03-09T17:26:56.757349+0000","last_became_active":"2026-03-09T17:26:32.118877+0000","last_became_peered":"2026-03-09T17:26:32.118877+0000","last_unstale":"2026-03-09T17:26:56.757349+0000","last_undegraded":"2026-03-09T17:26:56.757349+0000","last_fullsized":"2026-03-09T17:26:56.757349+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:50:52.389723+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748240+0000","last_change":"2026-03-09T17:26:34.120382+0000","last_active":"2026-03-09T17:26:56.748240+0000","last_peered":"2026-03-09T17:26:56.748240+0000","last_clean":"2026-03-09T17:26:56.748240+0000","last_became_active":"2026-03-09T17:26:34.120282+0000","last_became_peered":"2026-03-09T17:26:34.120282+0000","last_unstale":"2026-03-09T17:26:56.748240+0000","last_undegraded":"2026-03-09T17:26:56.748240+0000","last_fullsized":"2026-03-09T17:26:56.748240+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:17:00.910205+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"61'16","reported_seq":67,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189241+0000","last_change":"2026-03-09T17:26:30.396968+0000","last_active":"2026-03-09T17:27:38.189241+0000","last_peered":"2026-03-09T17:27:38.189241+0000","last_clean":"2026-03-09T17:27:38.189241+0000","last_became_active":"2026-03-09T17:26:30.396841+0000","last_became_peered":"2026-03-09T17:26:30.396841+0000","last_unstale":"2026-03-09T17:27:38.189241+0000","last_undegraded":"2026-03-09T17:27:38.189241+0000","last_fullsized":"2026-03-09T17:27:38.189241+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:03:12.965142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749023+0000","last_change":"2026-03-09T17:26:28.095980+0000","last_active":"2026-03-09T17:26:56.749023+0000","last_peered":"2026-03-09T17:26:56.749023+0000","last_clean":"2026-03-09T17:26:56.749023+0000","last_became_active":"2026-03-09T17:26:28.095893+0000","last_became_peered":"2026-03-09T17:26:28.095893+0000","last_unstale":"2026-03-09T17:26:56.749023+0000","last_undegraded":"2026-03-09T17:26:56.749023+0000","last_fullsized":"2026-03-09T17:26:56.749023+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:06:56.897212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"61'2","reported_seq":38,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749059+0000","last_change":"2026-03-09T17:26:35.119940+0000","last_active":"2026-03-09T17:26:56.749059+0000","last_peered":"2026-03-09T17:26:56.749059+0000","last_clean":"2026-03-09T17:26:56.749059+0000","last_became_active":"2026-03-09T17:26:30.118422+0000","last_became_peered":"2026-03-09T17:26:30.118422+0000","last_unstale":"2026-03-09T17:26:56.749059+0000","last_undegraded":"2026-03-09T17:26:56.749059+0000","last_fullsized":"2026-03-09T17:26:56.749059+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:11:58.236454+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00054771400000000004,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"61'11","reported_seq":51,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189604+0000","last_change":"2026-03-09T17:26:32.119742+0000","last_active":"2026-03-09T17:27:38.189604+0000","last_peered":"2026-03-09T17:27:38.189604+0000","last_clean":"2026-03-09T17:27:38.189604+0000","last_became_active":"2026-03-09T17:26:32.119655+0000","last_became_peered":"2026-03-09T17:26:32.119655+0000","last_unstale":"2026-03-09T17:27:38.189604+0000","last_undegraded":"2026-03-09T17:27:38.189604+0000","last_fullsized":"2026-03-09T17:27:38.189604+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:50:57.099697+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627619+0000","last_change":"2026-03-09T17:26:34.119403+0000","last_active":"2026-03-09T17:26:56.627619+0000","last_peered":"2026-03-09T17:26:56.627619+0000","last_clean":"2026-03-09T17:26:56.627619+0000","last_became_active":"2026-03-09T17:26:34.119266+0000","last_became_peered":"2026-03-09T17:26:34.119266+0000","last_unstale":"2026-03-09T17:26:56.627619+0000","last_undegraded":"2026-03-09T17:26:56.627619+0000","last_fullsized":"2026-03-09T17:26:56.627619+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:34:37.398165+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"61'19","reported_seq":65,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634804+0000","last_change":"2026-03-09T17:26:30.118050+0000","last_active":"2026-03-09T17:26:56.634804+0000","last_peered":"2026-03-09T17:26:56.634804+0000","last_clean":"2026-03-09T17:26:56.634804+0000","last_became_active":"2026-03-09T17:26:30.117283+0000","last_became_peered":"2026-03-09T17:26:30.117283+0000","last_unstale":"2026-03-09T17:26:56.634804+0000","last_undegraded":"2026-03-09T17:26:56.634804+0000","last_fullsized":"2026-03-09T17:26:56.634804+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:49:37.828390+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753069+0000","last_change":"2026-03-09T17:26:28.095537+0000","last_active":"2026-03-09T17:26:56.753069+0000","last_peered":"2026-03-09T17:26:56.753069+0000","last_clean":"2026-03-09T17:26:56.753069+0000","last_became_active":"2026-03-09T17:26:28.090595+0000","last_became_peered":"2026-03-09T17:26:28.090595+0000","last_unstale":"2026-03-09T17:26:56.753069+0000","last_undegraded":"2026-03-09T17:26:56.753069+0000","last_fullsized":"2026-03-09T17:26:56.753069+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:05:44.882119+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627487+0000","last_change":"2026-03-09T17:26:32.128057+0000","last_active":"2026-03-09T17:26:56.627487+0000","last_peered":"2026-03-09T17:26:56.627487+0000","last_clean":"2026-03-09T17:26:56.627487+0000","last_became_active":"2026-03-09T17:26:32.126612+0000","last_became_peered":"2026-03-09T17:26:32.126612+0000","last_unstale":"2026-03-09T17:26:56.627487+0000","last_undegraded":"2026-03-09T17:26:56.627487+0000","last_fullsized":"2026-03-09T17:26:56.627487+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:20:37.659788+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"61'1","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754002+0000","last_change":"2026-03-09T17:26:34.125838+0000","last_active":"2026-03-09T17:26:56.754002+0000","last_peered":"2026-03-09T17:26:56.754002+0000","last_clean":"2026-03-09T17:26:56.754002+0000","last_became_active":"2026-03-09T17:26:34.125770+0000","last_became_peered":"2026-03-09T17:26:34.125770+0000","last_unstale":"2026-03-09T17:26:56.754002+0000","last_undegraded":"2026-03-09T17:26:56.754002+0000","last_fullsized":"2026-03-09T17:26:56.754002+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:09:56.884477+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"61'18","reported_seq":61,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748559+0000","last_change":"2026-03-09T17:26:30.395271+0000","last_active":"2026-03-09T17:26:56.748559+0000","last_peered":"2026-03-09T17:26:56.748559+0000","last_clean":"2026-03-09T17:26:56.748559+0000","last_became_active":"2026-03-09T17:26:30.395032+0000","last_became_peered":"2026-03-09T17:26:30.395032+0000","last_unstale":"2026-03-09T17:26:56.748559+0000","last_undegraded":"2026-03-09T17:26:56.748559+0000","last_fullsized":"2026-03-09T17:26:56.748559+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:08:42.103405+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627425+0000","last_change":"2026-03-09T17:26:28.104215+0000","last_active":"2026-03-09T17:26:56.627425+0000","last_peered":"2026-03-09T17:26:56.627425+0000","last_clean":"2026-03-09T17:26:56.627425+0000","last_became_active":"2026-03-09T17:26:28.104051+0000","last_became_peered":"2026-03-09T17:26:28.104051+0000","last_unstale":"2026-03-09T17:26:56.627425+0000","last_undegraded":"2026-03-09T17:26:56.627425+0000","last_fullsized":"2026-03-09T17:26:56.627425+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:39:20.566288+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627470+0000","last_change":"2026-03-09T17:26:32.116470+0000","last_active":"2026-03-09T17:26:56.627470+0000","last_peered":"2026-03-09T17:26:56.627470+0000","last_clean":"2026-03-09T17:26:56.627470+0000","last_became_active":"2026-03-09T17:26:32.116379+0000","last_became_peered":"2026-03-09T17:26:32.116379+0000","last_unstale":"2026-03-09T17:26:56.627470+0000","last_undegraded":"2026-03-09T17:26:56.627470+0000","last_fullsized":"2026-03-09T17:26:56.627470+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:32:27.813666+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755310+0000","last_change":"2026-03-09T17:26:34.129611+0000","last_active":"2026-03-09T17:26:56.755310+0000","last_peered":"2026-03-09T17:26:56.755310+0000","last_clean":"2026-03-09T17:26:56.755310+0000","last_became_active":"2026-03-09T17:26:34.127502+0000","last_became_peered":"2026-03-09T17:26:34.127502+0000","last_unstale":"2026-03-09T17:26:56.755310+0000","last_undegraded":"2026-03-09T17:26:56.755310+0000","last_fullsized":"2026-03-09T17:26:56.755310+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:04:05.431518+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"61'14","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629443+0000","last_change":"2026-03-09T17:26:30.113476+0000","last_active":"2026-03-09T17:26:56.629443+0000","last_peered":"2026-03-09T17:26:56.629443+0000","last_clean":"2026-03-09T17:26:56.629443+0000","last_became_active":"2026-03-09T17:26:30.113360+0000","last_became_peered":"2026-03-09T17:26:30.113360+0000","last_unstale":"2026-03-09T17:26:56.629443+0000","last_undegraded":"2026-03-09T17:26:56.629443+0000","last_fullsized":"2026-03-09T17:26:56.629443+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:54:38.826467+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755093+0000","last_change":"2026-03-09T17:26:28.097840+0000","last_active":"2026-03-09T17:26:56.755093+0000","last_peered":"2026-03-09T17:26:56.755093+0000","last_clean":"2026-03-09T17:26:56.755093+0000","last_became_active":"2026-03-09T17:26:28.097684+0000","last_became_peered":"2026-03-09T17:26:28.097684+0000","last_unstale":"2026-03-09T17:26:56.755093+0000","last_undegraded":"2026-03-09T17:26:56.755093+0000","last_fullsized":"2026-03-09T17:26:56.755093+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:38:41.639436+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753370+0000","last_change":"2026-03-09T17:26:32.117255+0000","last_active":"2026-03-09T17:26:56.753370+0000","last_peered":"2026-03-09T17:26:56.753370+0000","last_clean":"2026-03-09T17:26:56.753370+0000","last_became_active":"2026-03-09T17:26:32.117149+0000","last_became_peered":"2026-03-09T17:26:32.117149+0000","last_unstale":"2026-03-09T17:26:56.753370+0000","last_undegraded":"2026-03-09T17:26:56.753370+0000","last_fullsized":"2026-03-09T17:26:56.753370+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:41:27.840394+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748724+0000","last_change":"2026-03-09T17:26:34.135460+0000","last_active":"2026-03-09T17:26:56.748724+0000","last_peered":"2026-03-09T17:26:56.748724+0000","last_clean":"2026-03-09T17:26:56.748724+0000","last_became_active":"2026-03-09T17:26:34.135369+0000","last_became_peered":"2026-03-09T17:26:34.135369+0000","last_unstale":"2026-03-09T17:26:56.748724+0000","last_undegraded":"2026-03-09T17:26:56.748724+0000","last_fullsized":"2026-03-09T17:26:56.748724+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:09:12.545572+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754389+0000","last_change":"2026-03-09T17:26:30.120511+0000","last_active":"2026-03-09T17:26:56.754389+0000","last_peered":"2026-03-09T17:26:56.754389+0000","last_clean":"2026-03-09T17:26:56.754389+0000","last_became_active":"2026-03-09T17:26:30.119388+0000","last_became_peered":"2026-03-09T17:26:30.119388+0000","last_unstale":"2026-03-09T17:26:56.754389+0000","last_undegraded":"2026-03-09T17:26:56.754389+0000","last_fullsized":"2026-03-09T17:26:56.754389+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:40:36.245642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752340+0000","last_change":"2026-03-09T17:26:28.098622+0000","last_active":"2026-03-09T17:26:56.752340+0000","last_peered":"2026-03-09T17:26:56.752340+0000","last_clean":"2026-03-09T17:26:56.752340+0000","last_became_active":"2026-03-09T17:26:28.098425+0000","last_became_peered":"2026-03-09T17:26:28.098425+0000","last_unstale":"2026-03-09T17:26:56.752340+0000","last_undegraded":"2026-03-09T17:26:56.752340+0000","last_fullsized":"2026-03-09T17:26:56.752340+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:26:29.688318+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"65'39","reported_seq":68,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:58.745207+0000","last_change":"2026-03-09T17:26:08.617223+0000","last_active":"2026-03-09T17:26:58.745207+0000","last_peered":"2026-03-09T17:26:58.745207+0000","last_clean":"2026-03-09T17:26:58.745207+0000","last_became_active":"2026-03-09T17:26:08.308356+0000","last_became_peered":"2026-03-09T17:26:08.308356+0000","last_unstale":"2026-03-09T17:26:58.745207+0000","last_undegraded":"2026-03-09T17:26:58.745207+0000","last_fullsized":"2026-03-09T17:26:58.745207+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:23:15.924968+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:23:15.924968+0000","last_clean_scrub_stamp":"2026-03-09T17:23:15.924968+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:32:39.998590+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754946+0000","last_change":"2026-03-09T17:26:32.129576+0000","last_active":"2026-03-09T17:26:56.754946+0000","last_peered":"2026-03-09T17:26:56.754946+0000","last_clean":"2026-03-09T17:26:56.754946+0000","last_became_active":"2026-03-09T17:26:32.129477+0000","last_became_peered":"2026-03-09T17:26:32.129477+0000","last_unstale":"2026-03-09T17:26:56.754946+0000","last_undegraded":"2026-03-09T17:26:56.754946+0000","last_fullsized":"2026-03-09T17:26:56.754946+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:19:06.048193+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752290+0000","last_change":"2026-03-09T17:26:34.135869+0000","last_active":"2026-03-09T17:26:56.752290+0000","last_peered":"2026-03-09T17:26:56.752290+0000","last_clean":"2026-03-09T17:26:56.752290+0000","last_became_active":"2026-03-09T17:26:34.135338+0000","last_became_peered":"2026-03-09T17:26:34.135338+0000","last_unstale":"2026-03-09T17:26:56.752290+0000","last_undegraded":"2026-03-09T17:26:56.752290+0000","last_fullsized":"2026-03-09T17:26:56.752290+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:26:58.210940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"61'17","reported_seq":57,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754379+0000","last_change":"2026-03-09T17:26:30.118758+0000","last_active":"2026-03-09T17:26:56.754379+0000","last_peered":"2026-03-09T17:26:56.754379+0000","last_clean":"2026-03-09T17:26:56.754379+0000","last_became_active":"2026-03-09T17:26:30.118676+0000","last_became_peered":"2026-03-09T17:26:30.118676+0000","last_unstale":"2026-03-09T17:26:56.754379+0000","last_undegraded":"2026-03-09T17:26:56.754379+0000","last_fullsized":"2026-03-09T17:26:56.754379+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:38:49.179914+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627372+0000","last_change":"2026-03-09T17:26:28.102544+0000","last_active":"2026-03-09T17:26:56.627372+0000","last_peered":"2026-03-09T17:26:56.627372+0000","last_clean":"2026-03-09T17:26:56.627372+0000","last_became_active":"2026-03-09T17:26:28.102447+0000","last_became_peered":"2026-03-09T17:26:28.102447+0000","last_unstale":"2026-03-09T17:26:56.627372+0000","last_undegraded":"2026-03-09T17:26:56.627372+0000","last_fullsized":"2026-03-09T17:26:56.627372+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:43:29.591411+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627399+0000","last_change":"2026-03-09T17:26:32.107460+0000","last_active":"2026-03-09T17:26:56.627399+0000","last_peered":"2026-03-09T17:26:56.627399+0000","last_clean":"2026-03-09T17:26:56.627399+0000","last_became_active":"2026-03-09T17:26:32.107332+0000","last_became_peered":"2026-03-09T17:26:32.107332+0000","last_unstale":"2026-03-09T17:26:56.627399+0000","last_undegraded":"2026-03-09T17:26:56.627399+0000","last_fullsized":"2026-03-09T17:26:56.627399+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:32:26.845186+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754433+0000","last_change":"2026-03-09T17:26:34.124631+0000","last_active":"2026-03-09T17:26:56.754433+0000","last_peered":"2026-03-09T17:26:56.754433+0000","last_clean":"2026-03-09T17:26:56.754433+0000","last_became_active":"2026-03-09T17:26:34.124564+0000","last_became_peered":"2026-03-09T17:26:34.124564+0000","last_unstale":"2026-03-09T17:26:56.754433+0000","last_undegraded":"2026-03-09T17:26:56.754433+0000","last_fullsized":"2026-03-09T17:26:56.754433+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:48:15.865249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753247+0000","last_change":"2026-03-09T17:26:30.122443+0000","last_active":"2026-03-09T17:26:56.753247+0000","last_peered":"2026-03-09T17:26:56.753247+0000","last_clean":"2026-03-09T17:26:56.753247+0000","last_became_active":"2026-03-09T17:26:30.121255+0000","last_became_peered":"2026-03-09T17:26:30.121255+0000","last_unstale":"2026-03-09T17:26:56.753247+0000","last_undegraded":"2026-03-09T17:26:56.753247+0000","last_fullsized":"2026-03-09T17:26:56.753247+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:11:21.072195+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748833+0000","last_change":"2026-03-09T17:26:28.108511+0000","last_active":"2026-03-09T17:26:56.748833+0000","last_peered":"2026-03-09T17:26:56.748833+0000","last_clean":"2026-03-09T17:26:56.748833+0000","last_became_active":"2026-03-09T17:26:28.108189+0000","last_became_peered":"2026-03-09T17:26:28.108189+0000","last_unstale":"2026-03-09T17:26:56.748833+0000","last_undegraded":"2026-03-09T17:26:56.748833+0000","last_fullsized":"2026-03-09T17:26:56.748833+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:56:03.838251+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627613+0000","last_change":"2026-03-09T17:26:32.120354+0000","last_active":"2026-03-09T17:26:56.627613+0000","last_peered":"2026-03-09T17:26:56.627613+0000","last_clean":"2026-03-09T17:26:56.627613+0000","last_became_active":"2026-03-09T17:26:32.120229+0000","last_became_peered":"2026-03-09T17:26:32.120229+0000","last_unstale":"2026-03-09T17:26:56.627613+0000","last_undegraded":"2026-03-09T17:26:56.627613+0000","last_fullsized":"2026-03-09T17:26:56.627613+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:46:50.708907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.626931+0000","last_change":"2026-03-09T17:26:34.120871+0000","last_active":"2026-03-09T17:26:56.626931+0000","last_peered":"2026-03-09T17:26:56.626931+0000","last_clean":"2026-03-09T17:26:56.626931+0000","last_became_active":"2026-03-09T17:26:34.120611+0000","last_became_peered":"2026-03-09T17:26:34.120611+0000","last_unstale":"2026-03-09T17:26:56.626931+0000","last_undegraded":"2026-03-09T17:26:56.626931+0000","last_fullsized":"2026-03-09T17:26:56.626931+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:18:24.312859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754539+0000","last_change":"2026-03-09T17:26:30.125043+0000","last_active":"2026-03-09T17:26:56.754539+0000","last_peered":"2026-03-09T17:26:56.754539+0000","last_clean":"2026-03-09T17:26:56.754539+0000","last_became_active":"2026-03-09T17:26:30.124914+0000","last_became_peered":"2026-03-09T17:26:30.124914+0000","last_unstale":"2026-03-09T17:26:56.754539+0000","last_undegraded":"2026-03-09T17:26:56.754539+0000","last_fullsized":"2026-03-09T17:26:56.754539+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:19:05.151893+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627325+0000","last_change":"2026-03-09T17:26:28.104230+0000","last_active":"2026-03-09T17:26:56.627325+0000","last_peered":"2026-03-09T17:26:56.627325+0000","last_clean":"2026-03-09T17:26:56.627325+0000","last_became_active":"2026-03-09T17:26:28.104027+0000","last_became_peered":"2026-03-09T17:26:28.104027+0000","last_unstale":"2026-03-09T17:26:56.627325+0000","last_undegraded":"2026-03-09T17:26:56.627325+0000","last_fullsized":"2026-03-09T17:26:56.627325+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:45:05.555677+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"61'11","reported_seq":51,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.188960+0000","last_change":"2026-03-09T17:26:32.129127+0000","last_active":"2026-03-09T17:27:38.188960+0000","last_peered":"2026-03-09T17:27:38.188960+0000","last_clean":"2026-03-09T17:27:38.188960+0000","last_became_active":"2026-03-09T17:26:32.127304+0000","last_became_peered":"2026-03-09T17:26:32.127304+0000","last_unstale":"2026-03-09T17:27:38.188960+0000","last_undegraded":"2026-03-09T17:27:38.188960+0000","last_fullsized":"2026-03-09T17:27:38.188960+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:12:24.681468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752599+0000","last_change":"2026-03-09T17:26:34.143981+0000","last_active":"2026-03-09T17:26:56.752599+0000","last_peered":"2026-03-09T17:26:56.752599+0000","last_clean":"2026-03-09T17:26:56.752599+0000","last_became_active":"2026-03-09T17:26:34.140600+0000","last_became_peered":"2026-03-09T17:26:34.140600+0000","last_unstale":"2026-03-09T17:26:56.752599+0000","last_undegraded":"2026-03-09T17:26:56.752599+0000","last_fullsized":"2026-03-09T17:26:56.752599+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:28:00.404167+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754954+0000","last_change":"2026-03-09T17:26:30.125033+0000","last_active":"2026-03-09T17:26:56.754954+0000","last_peered":"2026-03-09T17:26:56.754954+0000","last_clean":"2026-03-09T17:26:56.754954+0000","last_became_active":"2026-03-09T17:26:30.124886+0000","last_became_peered":"2026-03-09T17:26:30.124886+0000","last_unstale":"2026-03-09T17:26:56.754954+0000","last_undegraded":"2026-03-09T17:26:56.754954+0000","last_fullsized":"2026-03-09T17:26:56.754954+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:10:56.157992+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"54'2","reported_seq":49,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635328+0000","last_change":"2026-03-09T17:26:28.107361+0000","last_active":"2026-03-09T17:26:56.635328+0000","last_peered":"2026-03-09T17:26:56.635328+0000","last_clean":"2026-03-09T17:26:56.635328+0000","last_became_active":"2026-03-09T17:26:28.107284+0000","last_became_peered":"2026-03-09T17:26:28.107284+0000","last_unstale":"2026-03-09T17:26:56.635328+0000","last_undegraded":"2026-03-09T17:26:56.635328+0000","last_fullsized":"2026-03-09T17:26:56.635328+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:18:02.260389+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627571+0000","last_change":"2026-03-09T17:26:32.113566+0000","last_active":"2026-03-09T17:26:56.627571+0000","last_peered":"2026-03-09T17:26:56.627571+0000","last_clean":"2026-03-09T17:26:56.627571+0000","last_became_active":"2026-03-09T17:26:32.113454+0000","last_became_peered":"2026-03-09T17:26:32.113454+0000","last_unstale":"2026-03-09T17:26:56.627571+0000","last_undegraded":"2026-03-09T17:26:56.627571+0000","last_fullsized":"2026-03-09T17:26:56.627571+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:13:31.629318+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754131+0000","last_change":"2026-03-09T17:26:34.121865+0000","last_active":"2026-03-09T17:26:56.754131+0000","last_peered":"2026-03-09T17:26:56.754131+0000","last_clean":"2026-03-09T17:26:56.754131+0000","last_became_active":"2026-03-09T17:26:34.121751+0000","last_became_peered":"2026-03-09T17:26:34.121751+0000","last_unstale":"2026-03-09T17:26:56.754131+0000","last_undegraded":"2026-03-09T17:26:56.754131+0000","last_fullsized":"2026-03-09T17:26:56.754131+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:56:20.301200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754326+0000","last_change":"2026-03-09T17:26:30.124988+0000","last_active":"2026-03-09T17:26:56.754326+0000","last_peered":"2026-03-09T17:26:56.754326+0000","last_clean":"2026-03-09T17:26:56.754326+0000","last_became_active":"2026-03-09T17:26:30.124810+0000","last_became_peered":"2026-03-09T17:26:30.124810+0000","last_unstale":"2026-03-09T17:26:56.754326+0000","last_undegraded":"2026-03-09T17:26:56.754326+0000","last_fullsized":"2026-03-09T17:26:56.754326+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:58:27.927821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627230+0000","last_change":"2026-03-09T17:26:28.097881+0000","last_active":"2026-03-09T17:26:56.627230+0000","last_peered":"2026-03-09T17:26:56.627230+0000","last_clean":"2026-03-09T17:26:56.627230+0000","last_became_active":"2026-03-09T17:26:28.097778+0000","last_became_peered":"2026-03-09T17:26:28.097778+0000","last_unstale":"2026-03-09T17:26:56.627230+0000","last_undegraded":"2026-03-09T17:26:56.627230+0000","last_fullsized":"2026-03-09T17:26:56.627230+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:35:44.117390+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754454+0000","last_change":"2026-03-09T17:26:32.125143+0000","last_active":"2026-03-09T17:26:56.754454+0000","last_peered":"2026-03-09T17:26:56.754454+0000","last_clean":"2026-03-09T17:26:56.754454+0000","last_became_active":"2026-03-09T17:26:32.114480+0000","last_became_peered":"2026-03-09T17:26:32.114480+0000","last_unstale":"2026-03-09T17:26:56.754454+0000","last_undegraded":"2026-03-09T17:26:56.754454+0000","last_fullsized":"2026-03-09T17:26:56.754454+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:20:22.944515+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627260+0000","last_change":"2026-03-09T17:26:34.123068+0000","last_active":"2026-03-09T17:26:56.627260+0000","last_peered":"2026-03-09T17:26:56.627260+0000","last_clean":"2026-03-09T17:26:56.627260+0000","last_became_active":"2026-03-09T17:26:34.122938+0000","last_became_peered":"2026-03-09T17:26:34.122938+0000","last_unstale":"2026-03-09T17:26:56.627260+0000","last_undegraded":"2026-03-09T17:26:56.627260+0000","last_fullsized":"2026-03-09T17:26:56.627260+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:05:10.094914+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"61'4","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.757011+0000","last_change":"2026-03-09T17:26:30.120153+0000","last_active":"2026-03-09T17:26:56.757011+0000","last_peered":"2026-03-09T17:26:56.757011+0000","last_clean":"2026-03-09T17:26:56.757011+0000","last_became_active":"2026-03-09T17:26:30.120047+0000","last_became_peered":"2026-03-09T17:26:30.120047+0000","last_unstale":"2026-03-09T17:26:56.757011+0000","last_undegraded":"2026-03-09T17:26:56.757011+0000","last_fullsized":"2026-03-09T17:26:56.757011+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:28:29.528471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756898+0000","last_change":"2026-03-09T17:26:28.104471+0000","last_active":"2026-03-09T17:26:56.756898+0000","last_peered":"2026-03-09T17:26:56.756898+0000","last_clean":"2026-03-09T17:26:56.756898+0000","last_became_active":"2026-03-09T17:26:28.104117+0000","last_became_peered":"2026-03-09T17:26:28.104117+0000","last_unstale":"2026-03-09T17:26:56.756898+0000","last_undegraded":"2026-03-09T17:26:56.756898+0000","last_fullsized":"2026-03-09T17:26:56.756898+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:36.518485+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753413+0000","last_change":"2026-03-09T17:26:32.117750+0000","last_active":"2026-03-09T17:26:56.753413+0000","last_peered":"2026-03-09T17:26:56.753413+0000","last_clean":"2026-03-09T17:26:56.753413+0000","last_became_active":"2026-03-09T17:26:32.117664+0000","last_became_peered":"2026-03-09T17:26:32.117664+0000","last_unstale":"2026-03-09T17:26:56.753413+0000","last_undegraded":"2026-03-09T17:26:56.753413+0000","last_fullsized":"2026-03-09T17:26:56.753413+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:11:52.597538+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755402+0000","last_change":"2026-03-09T17:26:34.130232+0000","last_active":"2026-03-09T17:26:56.755402+0000","last_peered":"2026-03-09T17:26:56.755402+0000","last_clean":"2026-03-09T17:26:56.755402+0000","last_became_active":"2026-03-09T17:26:34.130084+0000","last_became_peered":"2026-03-09T17:26:34.130084+0000","last_unstale":"2026-03-09T17:26:56.755402+0000","last_undegraded":"2026-03-09T17:26:56.755402+0000","last_fullsized":"2026-03-09T17:26:56.755402+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:36:39.812867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"61'11","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754701+0000","last_change":"2026-03-09T17:26:30.395252+0000","last_active":"2026-03-09T17:26:56.754701+0000","last_peered":"2026-03-09T17:26:56.754701+0000","last_clean":"2026-03-09T17:26:56.754701+0000","last_became_active":"2026-03-09T17:26:30.395123+0000","last_became_peered":"2026-03-09T17:26:30.395123+0000","last_unstale":"2026-03-09T17:26:56.754701+0000","last_undegraded":"2026-03-09T17:26:56.754701+0000","last_fullsized":"2026-03-09T17:26:56.754701+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:48:44.876719+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"54'2","reported_seq":49,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752709+0000","last_change":"2026-03-09T17:26:28.096054+0000","last_active":"2026-03-09T17:26:56.752709+0000","last_peered":"2026-03-09T17:26:56.752709+0000","last_clean":"2026-03-09T17:26:56.752709+0000","last_became_active":"2026-03-09T17:26:28.095719+0000","last_became_peered":"2026-03-09T17:26:28.095719+0000","last_unstale":"2026-03-09T17:26:56.752709+0000","last_undegraded":"2026-03-09T17:26:56.752709+0000","last_fullsized":"2026-03-09T17:26:56.752709+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:47:51.318385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":1429,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":20,"num_read_kb":20,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"61'11","reported_seq":51,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189267+0000","last_change":"2026-03-09T17:26:32.120173+0000","last_active":"2026-03-09T17:27:38.189267+0000","last_peered":"2026-03-09T17:27:38.189267+0000","last_clean":"2026-03-09T17:27:38.189267+0000","last_became_active":"2026-03-09T17:26:32.120055+0000","last_became_peered":"2026-03-09T17:26:32.120055+0000","last_unstale":"2026-03-09T17:27:38.189267+0000","last_undegraded":"2026-03-09T17:27:38.189267+0000","last_fullsized":"2026-03-09T17:27:38.189267+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:10:39.114276+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627238+0000","last_change":"2026-03-09T17:26:34.121045+0000","last_active":"2026-03-09T17:26:56.627238+0000","last_peered":"2026-03-09T17:26:56.627238+0000","last_clean":"2026-03-09T17:26:56.627238+0000","last_became_active":"2026-03-09T17:26:34.120741+0000","last_became_peered":"2026-03-09T17:26:34.120741+0000","last_unstale":"2026-03-09T17:26:56.627238+0000","last_undegraded":"2026-03-09T17:26:56.627238+0000","last_fullsized":"2026-03-09T17:26:56.627238+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:35:36.246159+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627388+0000","last_change":"2026-03-09T17:26:30.115342+0000","last_active":"2026-03-09T17:26:56.627388+0000","last_peered":"2026-03-09T17:26:56.627388+0000","last_clean":"2026-03-09T17:26:56.627388+0000","last_became_active":"2026-03-09T17:26:30.114536+0000","last_became_peered":"2026-03-09T17:26:30.114536+0000","last_unstale":"2026-03-09T17:26:56.627388+0000","last_undegraded":"2026-03-09T17:26:56.627388+0000","last_fullsized":"2026-03-09T17:26:56.627388+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:10:41.945843+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627343+0000","last_change":"2026-03-09T17:26:28.100698+0000","last_active":"2026-03-09T17:26:56.627343+0000","last_peered":"2026-03-09T17:26:56.627343+0000","last_clean":"2026-03-09T17:26:56.627343+0000","last_became_active":"2026-03-09T17:26:28.100435+0000","last_became_peered":"2026-03-09T17:26:28.100435+0000","last_unstale":"2026-03-09T17:26:56.627343+0000","last_undegraded":"2026-03-09T17:26:56.627343+0000","last_fullsized":"2026-03-09T17:26:56.627343+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:26:03.529022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"61'11","reported_seq":51,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.189507+0000","last_change":"2026-03-09T17:26:32.124930+0000","last_active":"2026-03-09T17:27:38.189507+0000","last_peered":"2026-03-09T17:27:38.189507+0000","last_clean":"2026-03-09T17:27:38.189507+0000","last_became_active":"2026-03-09T17:26:32.117415+0000","last_became_peered":"2026-03-09T17:26:32.117415+0000","last_unstale":"2026-03-09T17:27:38.189507+0000","last_undegraded":"2026-03-09T17:27:38.189507+0000","last_fullsized":"2026-03-09T17:27:38.189507+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:15:13.515517+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635105+0000","last_change":"2026-03-09T17:26:34.138808+0000","last_active":"2026-03-09T17:26:56.635105+0000","last_peered":"2026-03-09T17:26:56.635105+0000","last_clean":"2026-03-09T17:26:56.635105+0000","last_became_active":"2026-03-09T17:26:34.138616+0000","last_became_peered":"2026-03-09T17:26:34.138616+0000","last_unstale":"2026-03-09T17:26:56.635105+0000","last_undegraded":"2026-03-09T17:26:56.635105+0000","last_fullsized":"2026-03-09T17:26:56.635105+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:07:10.246033+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755260+0000","last_change":"2026-03-09T17:26:30.124973+0000","last_active":"2026-03-09T17:26:56.755260+0000","last_peered":"2026-03-09T17:26:56.755260+0000","last_clean":"2026-03-09T17:26:56.755260+0000","last_became_active":"2026-03-09T17:26:30.124758+0000","last_became_peered":"2026-03-09T17:26:30.124758+0000","last_unstale":"2026-03-09T17:26:56.755260+0000","last_undegraded":"2026-03-09T17:26:56.755260+0000","last_fullsized":"2026-03-09T17:26:56.755260+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:09:41.078271+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756831+0000","last_change":"2026-03-09T17:26:28.090218+0000","last_active":"2026-03-09T17:26:56.756831+0000","last_peered":"2026-03-09T17:26:56.756831+0000","last_clean":"2026-03-09T17:26:56.756831+0000","last_became_active":"2026-03-09T17:26:28.090115+0000","last_became_peered":"2026-03-09T17:26:28.090115+0000","last_unstale":"2026-03-09T17:26:56.756831+0000","last_undegraded":"2026-03-09T17:26:56.756831+0000","last_fullsized":"2026-03-09T17:26:56.756831+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:02:34.392200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753903+0000","last_change":"2026-03-09T17:26:32.125008+0000","last_active":"2026-03-09T17:26:56.753903+0000","last_peered":"2026-03-09T17:26:56.753903+0000","last_clean":"2026-03-09T17:26:56.753903+0000","last_became_active":"2026-03-09T17:26:32.121849+0000","last_became_peered":"2026-03-09T17:26:32.121849+0000","last_unstale":"2026-03-09T17:26:56.753903+0000","last_undegraded":"2026-03-09T17:26:56.753903+0000","last_fullsized":"2026-03-09T17:26:56.753903+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:15:12.730149+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627069+0000","last_change":"2026-03-09T17:26:34.136649+0000","last_active":"2026-03-09T17:26:56.627069+0000","last_peered":"2026-03-09T17:26:56.627069+0000","last_clean":"2026-03-09T17:26:56.627069+0000","last_became_active":"2026-03-09T17:26:34.136545+0000","last_became_peered":"2026-03-09T17:26:34.136545+0000","last_unstale":"2026-03-09T17:26:56.627069+0000","last_undegraded":"2026-03-09T17:26:56.627069+0000","last_fullsized":"2026-03-09T17:26:56.627069+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:47:36.825938+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"61'10","reported_seq":44,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634519+0000","last_change":"2026-03-09T17:26:30.124478+0000","last_active":"2026-03-09T17:26:56.634519+0000","last_peered":"2026-03-09T17:26:56.634519+0000","last_clean":"2026-03-09T17:26:56.634519+0000","last_became_active":"2026-03-09T17:26:30.124399+0000","last_became_peered":"2026-03-09T17:26:30.124399+0000","last_unstale":"2026-03-09T17:26:56.634519+0000","last_undegraded":"2026-03-09T17:26:56.634519+0000","last_fullsized":"2026-03-09T17:26:56.634519+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:07:20.313320+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748652+0000","last_change":"2026-03-09T17:26:28.087985+0000","last_active":"2026-03-09T17:26:56.748652+0000","last_peered":"2026-03-09T17:26:56.748652+0000","last_clean":"2026-03-09T17:26:56.748652+0000","last_became_active":"2026-03-09T17:26:28.087883+0000","last_became_peered":"2026-03-09T17:26:28.087883+0000","last_unstale":"2026-03-09T17:26:56.748652+0000","last_undegraded":"2026-03-09T17:26:56.748652+0000","last_fullsized":"2026-03-09T17:26:56.748652+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:04:07.028953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.748681+0000","last_change":"2026-03-09T17:26:32.130920+0000","last_active":"2026-03-09T17:26:56.748681+0000","last_peered":"2026-03-09T17:26:56.748681+0000","last_clean":"2026-03-09T17:26:56.748681+0000","last_became_active":"2026-03-09T17:26:32.130825+0000","last_became_peered":"2026-03-09T17:26:32.130825+0000","last_unstale":"2026-03-09T17:26:56.748681+0000","last_undegraded":"2026-03-09T17:26:56.748681+0000","last_fullsized":"2026-03-09T17:26:56.748681+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:17:41.176939+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.753593+0000","last_change":"2026-03-09T17:26:34.138779+0000","last_active":"2026-03-09T17:26:56.753593+0000","last_peered":"2026-03-09T17:26:56.753593+0000","last_clean":"2026-03-09T17:26:56.753593+0000","last_became_active":"2026-03-09T17:26:34.137399+0000","last_became_peered":"2026-03-09T17:26:34.137399+0000","last_unstale":"2026-03-09T17:26:56.753593+0000","last_undegraded":"2026-03-09T17:26:56.753593+0000","last_fullsized":"2026-03-09T17:26:56.753593+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:51:54.899500+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"61'6","reported_seq":38,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629577+0000","last_change":"2026-03-09T17:26:30.118289+0000","last_active":"2026-03-09T17:26:56.629577+0000","last_peered":"2026-03-09T17:26:56.629577+0000","last_clean":"2026-03-09T17:26:56.629577+0000","last_became_active":"2026-03-09T17:26:30.118123+0000","last_became_peered":"2026-03-09T17:26:30.118123+0000","last_unstale":"2026-03-09T17:26:56.629577+0000","last_undegraded":"2026-03-09T17:26:56.629577+0000","last_fullsized":"2026-03-09T17:26:56.629577+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:01:31.934885+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752685+0000","last_change":"2026-03-09T17:26:28.102496+0000","last_active":"2026-03-09T17:26:56.752685+0000","last_peered":"2026-03-09T17:26:56.752685+0000","last_clean":"2026-03-09T17:26:56.752685+0000","last_became_active":"2026-03-09T17:26:28.101626+0000","last_became_peered":"2026-03-09T17:26:28.101626+0000","last_unstale":"2026-03-09T17:26:56.752685+0000","last_undegraded":"2026-03-09T17:26:56.752685+0000","last_fullsized":"2026-03-09T17:26:56.752685+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:08:33.383371+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756565+0000","last_change":"2026-03-09T17:26:32.122511+0000","last_active":"2026-03-09T17:26:56.756565+0000","last_peered":"2026-03-09T17:26:56.756565+0000","last_clean":"2026-03-09T17:26:56.756565+0000","last_became_active":"2026-03-09T17:26:32.122425+0000","last_became_peered":"2026-03-09T17:26:32.122425+0000","last_unstale":"2026-03-09T17:26:56.756565+0000","last_undegraded":"2026-03-09T17:26:56.756565+0000","last_fullsized":"2026-03-09T17:26:56.756565+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:38:20.380150+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755427+0000","last_change":"2026-03-09T17:26:34.129228+0000","last_active":"2026-03-09T17:26:56.755427+0000","last_peered":"2026-03-09T17:26:56.755427+0000","last_clean":"2026-03-09T17:26:56.755427+0000","last_became_active":"2026-03-09T17:26:34.129084+0000","last_became_peered":"2026-03-09T17:26:34.129084+0000","last_unstale":"2026-03-09T17:26:56.755427+0000","last_undegraded":"2026-03-09T17:26:56.755427+0000","last_fullsized":"2026-03-09T17:26:56.755427+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:58:17.819480+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752476+0000","last_change":"2026-03-09T17:26:30.123001+0000","last_active":"2026-03-09T17:26:56.752476+0000","last_peered":"2026-03-09T17:26:56.752476+0000","last_clean":"2026-03-09T17:26:56.752476+0000","last_became_active":"2026-03-09T17:26:30.122809+0000","last_became_peered":"2026-03-09T17:26:30.122809+0000","last_unstale":"2026-03-09T17:26:56.752476+0000","last_undegraded":"2026-03-09T17:26:56.752476+0000","last_fullsized":"2026-03-09T17:26:56.752476+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:36:59.018906+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756820+0000","last_change":"2026-03-09T17:26:28.097068+0000","last_active":"2026-03-09T17:26:56.756820+0000","last_peered":"2026-03-09T17:26:56.756820+0000","last_clean":"2026-03-09T17:26:56.756820+0000","last_became_active":"2026-03-09T17:26:28.096921+0000","last_became_peered":"2026-03-09T17:26:28.096921+0000","last_unstale":"2026-03-09T17:26:56.756820+0000","last_undegraded":"2026-03-09T17:26:56.756820+0000","last_fullsized":"2026-03-09T17:26:56.756820+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:43:49.091200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.755292+0000","last_change":"2026-03-09T17:26:32.129214+0000","last_active":"2026-03-09T17:26:56.755292+0000","last_peered":"2026-03-09T17:26:56.755292+0000","last_clean":"2026-03-09T17:26:56.755292+0000","last_became_active":"2026-03-09T17:26:32.127418+0000","last_became_peered":"2026-03-09T17:26:32.127418+0000","last_unstale":"2026-03-09T17:26:56.755292+0000","last_undegraded":"2026-03-09T17:26:56.755292+0000","last_fullsized":"2026-03-09T17:26:56.755292+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:50:38.421678+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754043+0000","last_change":"2026-03-09T17:26:34.121319+0000","last_active":"2026-03-09T17:26:56.754043+0000","last_peered":"2026-03-09T17:26:56.754043+0000","last_clean":"2026-03-09T17:26:56.754043+0000","last_became_active":"2026-03-09T17:26:34.118993+0000","last_became_peered":"2026-03-09T17:26:34.118993+0000","last_unstale":"2026-03-09T17:26:56.754043+0000","last_undegraded":"2026-03-09T17:26:56.754043+0000","last_fullsized":"2026-03-09T17:26:56.754043+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:13:15.662089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"61'1","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754969+0000","last_change":"2026-03-09T17:26:34.135324+0000","last_active":"2026-03-09T17:26:56.754969+0000","last_peered":"2026-03-09T17:26:56.754969+0000","last_clean":"2026-03-09T17:26:56.754969+0000","last_became_active":"2026-03-09T17:26:34.135213+0000","last_became_peered":"2026-03-09T17:26:34.135213+0000","last_unstale":"2026-03-09T17:26:56.754969+0000","last_undegraded":"2026-03-09T17:26:56.754969+0000","last_fullsized":"2026-03-09T17:26:56.754969+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:56:25.608633+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"61'15","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749095+0000","last_change":"2026-03-09T17:26:30.111642+0000","last_active":"2026-03-09T17:26:56.749095+0000","last_peered":"2026-03-09T17:26:56.749095+0000","last_clean":"2026-03-09T17:26:56.749095+0000","last_became_active":"2026-03-09T17:26:30.111478+0000","last_became_peered":"2026-03-09T17:26:30.111478+0000","last_unstale":"2026-03-09T17:26:56.749095+0000","last_undegraded":"2026-03-09T17:26:56.749095+0000","last_fullsized":"2026-03-09T17:26:56.749095+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:56:12.936712+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.752837+0000","last_change":"2026-03-09T17:26:28.102248+0000","last_active":"2026-03-09T17:26:56.752837+0000","last_peered":"2026-03-09T17:26:56.752837+0000","last_clean":"2026-03-09T17:26:56.752837+0000","last_became_active":"2026-03-09T17:26:28.101483+0000","last_became_peered":"2026-03-09T17:26:28.101483+0000","last_unstale":"2026-03-09T17:26:56.752837+0000","last_undegraded":"2026-03-09T17:26:56.752837+0000","last_fullsized":"2026-03-09T17:26:56.752837+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:38.243601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"61'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:27:38.188973+0000","last_change":"2026-03-09T17:26:32.122824+0000","last_active":"2026-03-09T17:27:38.188973+0000","last_peered":"2026-03-09T17:27:38.188973+0000","last_clean":"2026-03-09T17:27:38.188973+0000","last_became_active":"2026-03-09T17:26:32.122686+0000","last_became_peered":"2026-03-09T17:26:32.122686+0000","last_unstale":"2026-03-09T17:27:38.188973+0000","last_undegraded":"2026-03-09T17:27:38.188973+0000","last_fullsized":"2026-03-09T17:27:38.188973+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:13:26.800385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749193+0000","last_change":"2026-03-09T17:26:34.139361+0000","last_active":"2026-03-09T17:26:56.749193+0000","last_peered":"2026-03-09T17:26:56.749193+0000","last_clean":"2026-03-09T17:26:56.749193+0000","last_became_active":"2026-03-09T17:26:34.138661+0000","last_became_peered":"2026-03-09T17:26:34.138661+0000","last_unstale":"2026-03-09T17:26:56.749193+0000","last_undegraded":"2026-03-09T17:26:56.749193+0000","last_fullsized":"2026-03-09T17:26:56.749193+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:55:19.714654+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754245+0000","last_change":"2026-03-09T17:26:30.120429+0000","last_active":"2026-03-09T17:26:56.754245+0000","last_peered":"2026-03-09T17:26:56.754245+0000","last_clean":"2026-03-09T17:26:56.754245+0000","last_became_active":"2026-03-09T17:26:30.119815+0000","last_became_peered":"2026-03-09T17:26:30.119815+0000","last_unstale":"2026-03-09T17:26:56.754245+0000","last_undegraded":"2026-03-09T17:26:56.754245+0000","last_fullsized":"2026-03-09T17:26:56.754245+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:54:50.098737+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"54'1","reported_seq":34,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754244+0000","last_change":"2026-03-09T17:26:28.099713+0000","last_active":"2026-03-09T17:26:56.754244+0000","last_peered":"2026-03-09T17:26:56.754244+0000","last_clean":"2026-03-09T17:26:56.754244+0000","last_became_active":"2026-03-09T17:26:28.099482+0000","last_became_peered":"2026-03-09T17:26:28.099482+0000","last_unstale":"2026-03-09T17:26:56.754244+0000","last_undegraded":"2026-03-09T17:26:56.754244+0000","last_fullsized":"2026-03-09T17:26:56.754244+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:42:38.823670+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.627736+0000","last_change":"2026-03-09T17:26:32.118586+0000","last_active":"2026-03-09T17:26:56.627736+0000","last_peered":"2026-03-09T17:26:56.627736+0000","last_clean":"2026-03-09T17:26:56.627736+0000","last_became_active":"2026-03-09T17:26:32.118387+0000","last_became_peered":"2026-03-09T17:26:32.118387+0000","last_unstale":"2026-03-09T17:26:56.627736+0000","last_undegraded":"2026-03-09T17:26:56.627736+0000","last_fullsized":"2026-03-09T17:26:56.627736+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:56:31.340982+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.634915+0000","last_change":"2026-03-09T17:26:34.139676+0000","last_active":"2026-03-09T17:26:56.634915+0000","last_peered":"2026-03-09T17:26:56.634915+0000","last_clean":"2026-03-09T17:26:56.634915+0000","last_became_active":"2026-03-09T17:26:34.137914+0000","last_became_peered":"2026-03-09T17:26:34.137914+0000","last_unstale":"2026-03-09T17:26:56.634915+0000","last_undegraded":"2026-03-09T17:26:56.634915+0000","last_fullsized":"2026-03-09T17:26:56.634915+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:55:56.520253+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.756727+0000","last_change":"2026-03-09T17:26:28.104009+0000","last_active":"2026-03-09T17:26:56.756727+0000","last_peered":"2026-03-09T17:26:56.756727+0000","last_clean":"2026-03-09T17:26:56.756727+0000","last_became_active":"2026-03-09T17:26:28.103897+0000","last_became_peered":"2026-03-09T17:26:28.103897+0000","last_unstale":"2026-03-09T17:26:56.756727+0000","last_undegraded":"2026-03-09T17:26:56.756727+0000","last_fullsized":"2026-03-09T17:26:56.756727+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:10:43.864112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"61'5","reported_seq":39,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.629659+0000","last_change":"2026-03-09T17:26:30.116445+0000","last_active":"2026-03-09T17:26:56.629659+0000","last_peered":"2026-03-09T17:26:56.629659+0000","last_clean":"2026-03-09T17:26:56.629659+0000","last_became_active":"2026-03-09T17:26:30.115041+0000","last_became_peered":"2026-03-09T17:26:30.115041+0000","last_unstale":"2026-03-09T17:26:56.629659+0000","last_undegraded":"2026-03-09T17:26:56.629659+0000","last_fullsized":"2026-03-09T17:26:56.629659+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:17:59.624191+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.749585+0000","last_change":"2026-03-09T17:26:32.129236+0000","last_active":"2026-03-09T17:26:56.749585+0000","last_peered":"2026-03-09T17:26:56.749585+0000","last_clean":"2026-03-09T17:26:56.749585+0000","last_became_active":"2026-03-09T17:26:32.128985+0000","last_became_peered":"2026-03-09T17:26:32.128985+0000","last_unstale":"2026-03-09T17:26:56.749585+0000","last_undegraded":"2026-03-09T17:26:56.749585+0000","last_fullsized":"2026-03-09T17:26:56.749585+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:43:40.111344+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754166+0000","last_change":"2026-03-09T17:26:34.138621+0000","last_active":"2026-03-09T17:26:56.754166+0000","last_peered":"2026-03-09T17:26:56.754166+0000","last_clean":"2026-03-09T17:26:56.754166+0000","last_became_active":"2026-03-09T17:26:34.137795+0000","last_became_peered":"2026-03-09T17:26:34.137795+0000","last_unstale":"2026-03-09T17:26:56.754166+0000","last_undegraded":"2026-03-09T17:26:56.754166+0000","last_fullsized":"2026-03-09T17:26:56.754166+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:33.096529+0000","last_clean_scrub_stamp":"2026-03-09T17:26:33.096529+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:56:14.679961+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":33,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.754190+0000","last_change":"2026-03-09T17:26:28.099787+0000","last_active":"2026-03-09T17:26:56.754190+0000","last_peered":"2026-03-09T17:26:56.754190+0000","last_clean":"2026-03-09T17:26:56.754190+0000","last_became_active":"2026-03-09T17:26:28.099621+0000","last_became_peered":"2026-03-09T17:26:28.099621+0000","last_unstale":"2026-03-09T17:26:56.754190+0000","last_undegraded":"2026-03-09T17:26:56.754190+0000","last_fullsized":"2026-03-09T17:26:56.754190+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:27.050298+0000","last_clean_scrub_stamp":"2026-03-09T17:26:27.050298+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:58:09.894637+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"61'9","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635360+0000","last_change":"2026-03-09T17:26:30.395867+0000","last_active":"2026-03-09T17:26:56.635360+0000","last_peered":"2026-03-09T17:26:56.635360+0000","last_clean":"2026-03-09T17:26:56.635360+0000","last_became_active":"2026-03-09T17:26:30.394776+0000","last_became_peered":"2026-03-09T17:26:30.394776+0000","last_unstale":"2026-03-09T17:26:56.635360+0000","last_undegraded":"2026-03-09T17:26:56.635360+0000","last_fullsized":"2026-03-09T17:26:56.635360+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:29.069840+0000","last_clean_scrub_stamp":"2026-03-09T17:26:29.069840+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:34:48.765262+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T17:26:56.635698+0000","last_change":"2026-03-09T17:26:32.125547+0000","last_active":"2026-03-09T17:26:56.635698+0000","last_peered":"2026-03-09T17:26:56.635698+0000","last_clean":"2026-03-09T17:26:56.635698+0000","last_became_active":"2026-03-09T17:26:32.125284+0000","last_became_peered":"2026-03-09T17:26:32.125284+0000","last_unstale":"2026-03-09T17:26:56.635698+0000","last_undegraded":"2026-03-09T17:26:56.635698+0000","last_fullsized":"2026-03-09T17:26:56.635698+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T17:26:31.088022+0000","last_clean_scrub_stamp":"2026-03-09T17:26:31.088022+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:10:17.682655+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":72,"num_read_kb":67,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":50,"seq":214748364822,"num_pgs":59,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1146880,"data_stored":710774,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561053,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27912,"kb_used_data":1080,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939512,"statfs":{"total":21470642176,"available":21442060288,"internally_reserved":0,"allocated":1105920,"data_stored":707990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":38,"seq":163208757283,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":250541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":32,"seq":137438953514,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27512,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939912,"statfs":{"total":21470642176,"available":21442469888,"internally_reserved":0,"allocated":688128,"data_stored":249668,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149745,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":250539,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411384,"num_pgs":39,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27472,"kb_used_data":632,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939952,"statfs":{"total":21470642176,"available":21442510848,"internally_reserved":0,"allocated":647168,"data_stored":249203,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574911,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":644,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":659456,"data_stored":249041,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738437,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27932,"kb_used_data":1096,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939492,"statfs":{"total":21470642176,"available":21442039808,"internally_reserved":0,"allocated":1122304,"data_stored":708645,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1429,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"inter
nally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"i
nternally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T17:27:49.565 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T17:27:49.565 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T17:27:49.565 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T17:27:49.565 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph health --format=json 2026-03-09T17:27:50.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:49 vm02 bash[23351]: cluster 2026-03-09T17:27:48.660536+0000 mgr.y (mgr.14505) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:50.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:49 vm02 bash[23351]: cluster 2026-03-09T17:27:48.660536+0000 mgr.y (mgr.14505) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:50.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:49 vm00 bash[28333]: cluster 2026-03-09T17:27:48.660536+0000 mgr.y (mgr.14505) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:50.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:49 vm00 bash[28333]: cluster 2026-03-09T17:27:48.660536+0000 mgr.y (mgr.14505) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:50.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:49 vm00 bash[20770]: cluster 2026-03-09T17:27:48.660536+0000 mgr.y (mgr.14505) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:50.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:49 vm00 bash[20770]: cluster 2026-03-09T17:27:48.660536+0000 mgr.y (mgr.14505) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:51.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:50 vm02 bash[23351]: audit 2026-03-09T17:27:49.501096+0000 mgr.y (mgr.14505) 67 : audit [DBG] from='client.24539 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:51.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:50 vm02 bash[23351]: audit 2026-03-09T17:27:49.501096+0000 mgr.y (mgr.14505) 67 : audit [DBG] from='client.24539 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:51.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:50 vm00 bash[28333]: audit 2026-03-09T17:27:49.501096+0000 mgr.y (mgr.14505) 67 : audit [DBG] from='client.24539 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:51.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:50 vm00 bash[28333]: audit 2026-03-09T17:27:49.501096+0000 mgr.y (mgr.14505) 67 : audit [DBG] from='client.24539 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:51.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:50 vm00 bash[20770]: audit 2026-03-09T17:27:49.501096+0000 mgr.y (mgr.14505) 67 : audit [DBG] from='client.24539 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 
2026-03-09T17:27:51.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:50 vm00 bash[20770]: audit 2026-03-09T17:27:49.501096+0000 mgr.y (mgr.14505) 67 : audit [DBG] from='client.24539 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:27:51.846 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:27:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:27:52.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:51 vm02 bash[23351]: cluster 2026-03-09T17:27:50.660839+0000 mgr.y (mgr.14505) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:52.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:51 vm02 bash[23351]: cluster 2026-03-09T17:27:50.660839+0000 mgr.y (mgr.14505) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:52.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:51 vm00 bash[28333]: cluster 2026-03-09T17:27:50.660839+0000 mgr.y (mgr.14505) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:52.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:51 vm00 bash[28333]: cluster 2026-03-09T17:27:50.660839+0000 mgr.y (mgr.14505) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:52.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:51 vm00 bash[20770]: cluster 2026-03-09T17:27:50.660839+0000 mgr.y (mgr.14505) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:52.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:51 vm00 bash[20770]: cluster 2026-03-09T17:27:50.660839+0000 mgr.y (mgr.14505) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:52 vm02 bash[23351]: audit 2026-03-09T17:27:51.478407+0000 mgr.y (mgr.14505) 69 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:53.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:52 vm02 bash[23351]: audit 2026-03-09T17:27:51.478407+0000 mgr.y (mgr.14505) 69 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:53.269 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T17:27:53.286 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:52 vm00 bash[20770]: audit 2026-03-09T17:27:51.478407+0000 mgr.y (mgr.14505) 69 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:53.286 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:52 vm00 bash[20770]: audit 2026-03-09T17:27:51.478407+0000 mgr.y (mgr.14505) 69 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:53.286 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:52 vm00 bash[28333]: audit 
2026-03-09T17:27:51.478407+0000 mgr.y (mgr.14505) 69 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:53.286 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:52 vm00 bash[28333]: audit 2026-03-09T17:27:51.478407+0000 mgr.y (mgr.14505) 69 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2097365115 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 msgr2=0x7f8e501040b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2097365115 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e501040b0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f8e4000dec0 tx=0x7f8e40033350 comp rx=0 tx=0).stop 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2097365115 shutdown_connections 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2097365115 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e5010d0b0 0x7f8e5010f550 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2097365115 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8e501045f0 0x7f8e5010cae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2097365115 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e501040b0 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2097365115 >> 192.168.123.100:0/2097365115 conn(0x7f8e500fd6b0 msgr2=0x7f8e500ffad0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2097365115 shutdown_connections 2026-03-09T17:27:53.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2097365115 wait complete. 
2026-03-09T17:27:53.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 Processor -- start 2026-03-09T17:27:53.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- start start 2026-03-09T17:27:53.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e50198610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:53.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e501045f0 0x7f8e50198b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:53.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e50198610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e50198610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:33974/0 (socket says 192.168.123.100:33974) 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8e5010d0b0 0x7f8e5019cee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8e501036a0 con 0x7f8e501045f0 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f8e50103520 con 0x7f8e5010d0b0 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f8e50103820 con 0x7f8e50103cd0 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 -- 192.168.123.100:0/2509419205 learned_addr learned my addr 192.168.123.100:0/2509419205 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 -- 192.168.123.100:0/2509419205 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8e5010d0b0 msgr2=0x7f8e5019cee0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e554cb640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8e5010d0b0 0x7f8e5019cee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:53.414 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8e5010d0b0 0x7f8e5019cee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 -- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e501045f0 msgr2=0x7f8e50198b50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e47fff640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e501045f0 0x7f8e50198b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e501045f0 0x7f8e50198b50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 -- 192.168.123.100:0/2509419205 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8e5019d5c0 con 0x7f8e50103cd0 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e54cca640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e50198610 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f8e40033860 tx=0x7f8e40004290 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e47fff640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e501045f0 0x7f8e50198b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8e40002af0 con 0x7f8e50103cd0 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f8e40002c90 con 0x7f8e50103cd0 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8e5019d850 con 0x7f8e50103cd0 2026-03-09T17:27:53.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.411+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f8e501a5190 con 0x7f8e50103cd0 2026-03-09T17:27:53.417 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.415+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8e40004b10 con 0x7f8e50103cd0 2026-03-09T17:27:53.419 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.415+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f8e4000b6d0 con 0x7f8e50103cd0 2026-03-09T17:27:53.419 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.415+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8e18005180 con 0x7f8e50103cd0 2026-03-09T17:27:53.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.419+0000 7f8e45ffb640 1 --2- 192.168.123.100:0/2509419205 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8e2c077680 0x7f8e2c079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T17:27:53.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.419+0000 7f8e47fff640 1 --2- 192.168.123.100:0/2509419205 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8e2c077680 0x7f8e2c079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T17:27:53.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.419+0000 7f8e47fff640 1 --2- 192.168.123.100:0/2509419205 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8e2c077680 0x7f8e2c079b40 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f8e500ffd20 tx=0x7f8e38005e30 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T17:27:53.421 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.419+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f8e400c32b0 con 0x7f8e50103cd0 2026-03-09T17:27:53.421 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.419+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8e4004a050 con 0x7f8e50103cd0 2026-03-09T17:27:53.535 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7f8e18005470 con 0x7f8e50103cd0 2026-03-09T17:27:53.536 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e45ffb640 1 -- 192.168.123.100:0/2509419205 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (secure 0 0 0) 0x7f8e40090110 con 0x7f8e50103cd0 2026-03-09T17:27:53.536 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T17:27:53.536 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T17:27:53.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8e2c077680 msgr2=0x7f8e2c079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:53.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2509419205 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8e2c077680 0x7f8e2c079b40 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f8e500ffd20 tx=0x7f8e38005e30 comp rx=0 tx=0).stop 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 msgr2=0x7f8e50198610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e50198610 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f8e40033860 tx=0x7f8e40004290 comp rx=0 tx=0).stop 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 shutdown_connections 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2509419205 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8e2c077680 0x7f8e2c079b40 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8e5010d0b0 0x7f8e5019cee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8e501045f0 0x7f8e50198b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 --2- 192.168.123.100:0/2509419205 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8e50103cd0 0x7f8e50198610 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 
192.168.123.100:0/2509419205 >> 192.168.123.100:0/2509419205 conn(0x7f8e500fd6b0 msgr2=0x7f8e500fef10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 shutdown_connections 2026-03-09T17:27:53.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T17:27:53.535+0000 7f8e56f55640 1 -- 192.168.123.100:0/2509419205 wait complete. 2026-03-09T17:27:53.587 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T17:27:53.587 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T17:27:53.587 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T17:27:53.591 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T17:27:53.591 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-09T17:27:53.591 DEBUG:teuthology.orchestra.run.vm00:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T17:27:53.595 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:27:53.595 INFO:teuthology.orchestra.run.vm00.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T17:27:53.595 DEBUG:teuthology.orchestra.run.vm00:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T17:27:53.642 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T17:27:53.642 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T17:27:53.686 INFO:tasks.workunit:timeout=3h 2026-03-09T17:27:53.686 INFO:tasks.workunit:cleanup=True 2026-03-09T17:27:53.686 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T17:27:53.732 INFO:tasks.workunit.client.0.vm00.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T17:27:53.856 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:53 vm00 bash[20770]: cluster 2026-03-09T17:27:52.661073+0000 mgr.y (mgr.14505) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:53.856 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:53 vm00 bash[20770]: cluster 2026-03-09T17:27:52.661073+0000 mgr.y (mgr.14505) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:53.856 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:53 vm00 bash[20770]: audit 2026-03-09T17:27:53.536583+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.100:0/2509419205' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:27:53.856 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:53 vm00 bash[20770]: audit 2026-03-09T17:27:53.536583+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 
192.168.123.100:0/2509419205' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:27:54.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:53 vm02 bash[23351]: cluster 2026-03-09T17:27:52.661073+0000 mgr.y (mgr.14505) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:54.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:53 vm02 bash[23351]: cluster 2026-03-09T17:27:52.661073+0000 mgr.y (mgr.14505) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:54.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:53 vm02 bash[23351]: audit 2026-03-09T17:27:53.536583+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.100:0/2509419205' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:27:54.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:53 vm02 bash[23351]: audit 2026-03-09T17:27:53.536583+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.100:0/2509419205' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:27:54.283 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:53 vm00 bash[28333]: cluster 2026-03-09T17:27:52.661073+0000 mgr.y (mgr.14505) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:54.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:53 vm00 bash[28333]: cluster 2026-03-09T17:27:52.661073+0000 mgr.y (mgr.14505) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:54.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:53 vm00 bash[28333]: audit 2026-03-09T17:27:53.536583+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.100:0/2509419205' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:27:54.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:53 vm00 bash[28333]: audit 2026-03-09T17:27:53.536583+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 
192.168.123.100:0/2509419205' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:27:56.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:55 vm02 bash[23351]: cluster 2026-03-09T17:27:54.661458+0000 mgr.y (mgr.14505) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:56.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:55 vm02 bash[23351]: cluster 2026-03-09T17:27:54.661458+0000 mgr.y (mgr.14505) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:56.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:55 vm00 bash[28333]: cluster 2026-03-09T17:27:54.661458+0000 mgr.y (mgr.14505) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:55 vm00 bash[28333]: cluster 2026-03-09T17:27:54.661458+0000 mgr.y (mgr.14505) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:55 vm00 bash[20770]: cluster 2026-03-09T17:27:54.661458+0000 mgr.y (mgr.14505) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:55 vm00 bash[20770]: cluster 2026-03-09T17:27:54.661458+0000 mgr.y (mgr.14505) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:27:56.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:27:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:27:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:27:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:56 vm00 bash[28333]: audit 2026-03-09T17:27:56.784349+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:57.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:56 vm00 bash[28333]: audit 2026-03-09T17:27:56.784349+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:56 vm00 bash[20770]: audit 2026-03-09T17:27:56.784349+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:56 vm00 bash[20770]: audit 2026-03-09T17:27:56.784349+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:57.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:56 vm02 bash[23351]: audit 2026-03-09T17:27:56.784349+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:57.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 
09 17:27:56 vm02 bash[23351]: audit 2026-03-09T17:27:56.784349+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:27:58.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:57 vm00 bash[20770]: cluster 2026-03-09T17:27:56.661782+0000 mgr.y (mgr.14505) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:27:57 vm00 bash[20770]: cluster 2026-03-09T17:27:56.661782+0000 mgr.y (mgr.14505) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:57 vm00 bash[28333]: cluster 2026-03-09T17:27:56.661782+0000 mgr.y (mgr.14505) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:27:57 vm00 bash[28333]: cluster 2026-03-09T17:27:56.661782+0000 mgr.y (mgr.14505) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:57 vm02 bash[23351]: cluster 2026-03-09T17:27:56.661782+0000 mgr.y (mgr.14505) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:27:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:27:57 vm02 bash[23351]: cluster 2026-03-09T17:27:56.661782+0000 mgr.y (mgr.14505) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:00 vm00 bash[20770]: cluster 2026-03-09T17:27:58.662295+0000 mgr.y (mgr.14505) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:00 vm00 bash[20770]: cluster 2026-03-09T17:27:58.662295+0000 mgr.y (mgr.14505) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:00 vm00 bash[28333]: cluster 2026-03-09T17:27:58.662295+0000 mgr.y (mgr.14505) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:00 vm00 bash[28333]: cluster 2026-03-09T17:27:58.662295+0000 mgr.y (mgr.14505) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:00 vm02 bash[23351]: cluster 2026-03-09T17:27:58.662295+0000 mgr.y (mgr.14505) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:00 vm02 bash[23351]: cluster 2026-03-09T17:27:58.662295+0000 mgr.y (mgr.14505) 73 : cluster [DBG] pgmap v34: 132 pgs: 
132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:01.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:28:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:28:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:02 vm02 bash[23351]: cluster 2026-03-09T17:28:00.662652+0000 mgr.y (mgr.14505) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:02 vm02 bash[23351]: cluster 2026-03-09T17:28:00.662652+0000 mgr.y (mgr.14505) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:02 vm00 bash[20770]: cluster 2026-03-09T17:28:00.662652+0000 mgr.y (mgr.14505) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:02 vm00 bash[20770]: cluster 2026-03-09T17:28:00.662652+0000 mgr.y (mgr.14505) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:02 vm00 bash[28333]: cluster 2026-03-09T17:28:00.662652+0000 mgr.y (mgr.14505) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:02 vm00 bash[28333]: cluster 2026-03-09T17:28:00.662652+0000 mgr.y (mgr.14505) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:03.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:03 vm02 bash[23351]: audit 2026-03-09T17:28:01.481327+0000 mgr.y (mgr.14505) 75 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:03.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:03 vm02 bash[23351]: audit 2026-03-09T17:28:01.481327+0000 mgr.y (mgr.14505) 75 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:03 vm00 bash[20770]: audit 2026-03-09T17:28:01.481327+0000 mgr.y (mgr.14505) 75 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:03 vm00 bash[20770]: audit 2026-03-09T17:28:01.481327+0000 mgr.y (mgr.14505) 75 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:03 vm00 bash[28333]: audit 2026-03-09T17:28:01.481327+0000 mgr.y (mgr.14505) 75 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:03 vm00 bash[28333]: audit 2026-03-09T17:28:01.481327+0000 mgr.y (mgr.14505) 75 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:04.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:28:04 vm02 bash[51223]: logger=infra.usagestats t=2026-03-09T17:28:04.04594986Z level=info msg="Usage stats are ready to report" 2026-03-09T17:28:04.653 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:04 vm02 bash[23351]: cluster 2026-03-09T17:28:02.662943+0000 mgr.y (mgr.14505) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:04.653 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:04 vm02 bash[23351]: cluster 2026-03-09T17:28:02.662943+0000 mgr.y (mgr.14505) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:04.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:04 vm00 bash[20770]: cluster 2026-03-09T17:28:02.662943+0000 mgr.y (mgr.14505) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:04.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:04 vm00 bash[20770]: cluster 2026-03-09T17:28:02.662943+0000 mgr.y (mgr.14505) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:04 vm00 bash[28333]: cluster 2026-03-09T17:28:02.662943+0000 mgr.y (mgr.14505) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:04 vm00 bash[28333]: cluster 2026-03-09T17:28:02.662943+0000 mgr.y (mgr.14505) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:05 vm00 bash[20770]: cluster 2026-03-09T17:28:04.663413+0000 mgr.y (mgr.14505) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:05 vm00 bash[20770]: cluster 2026-03-09T17:28:04.663413+0000 mgr.y (mgr.14505) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:05 vm00 bash[28333]: cluster 2026-03-09T17:28:04.663413+0000 mgr.y (mgr.14505) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:05 vm00 bash[28333]: cluster 2026-03-09T17:28:04.663413+0000 mgr.y (mgr.14505) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:05.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:05 vm02 bash[23351]: cluster 2026-03-09T17:28:04.663413+0000 mgr.y (mgr.14505) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:05.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:05 vm02 bash[23351]: 
cluster 2026-03-09T17:28:04.663413+0000 mgr.y (mgr.14505) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:06.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:28:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:28:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:28:08.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:07 vm00 bash[20770]: cluster 2026-03-09T17:28:06.663694+0000 mgr.y (mgr.14505) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:07 vm00 bash[20770]: cluster 2026-03-09T17:28:06.663694+0000 mgr.y (mgr.14505) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:07 vm00 bash[28333]: cluster 2026-03-09T17:28:06.663694+0000 mgr.y (mgr.14505) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:07 vm00 bash[28333]: cluster 2026-03-09T17:28:06.663694+0000 mgr.y (mgr.14505) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:07 vm02 bash[23351]: cluster 2026-03-09T17:28:06.663694+0000 mgr.y (mgr.14505) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:08.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:07 vm02 bash[23351]: cluster 2026-03-09T17:28:06.663694+0000 mgr.y (mgr.14505) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:10.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:09 vm00 bash[20770]: cluster 2026-03-09T17:28:08.664179+0000 mgr.y (mgr.14505) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:09 vm00 bash[20770]: cluster 2026-03-09T17:28:08.664179+0000 mgr.y (mgr.14505) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:09 vm00 bash[28333]: cluster 2026-03-09T17:28:08.664179+0000 mgr.y (mgr.14505) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:09 vm00 bash[28333]: cluster 2026-03-09T17:28:08.664179+0000 mgr.y (mgr.14505) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:10.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:09 vm02 bash[23351]: cluster 2026-03-09T17:28:08.664179+0000 mgr.y (mgr.14505) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:10.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:09 vm02 bash[23351]: cluster 2026-03-09T17:28:08.664179+0000 mgr.y (mgr.14505) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:11.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:11 vm02 bash[23351]: cluster 2026-03-09T17:28:10.664551+0000 mgr.y (mgr.14505) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:11.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:11 vm02 bash[23351]: cluster 2026-03-09T17:28:10.664551+0000 mgr.y (mgr.14505) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:11.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:28:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:28:12.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:11 vm00 bash[20770]: cluster 2026-03-09T17:28:10.664551+0000 mgr.y (mgr.14505) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:12.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:11 vm00 bash[20770]: cluster 2026-03-09T17:28:10.664551+0000 mgr.y (mgr.14505) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:12.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:11 vm00 bash[28333]: cluster 2026-03-09T17:28:10.664551+0000 mgr.y (mgr.14505) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:12.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:11 vm00 bash[28333]: cluster 2026-03-09T17:28:10.664551+0000 mgr.y (mgr.14505) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:13.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:12 vm00 bash[20770]: audit 2026-03-09T17:28:11.484789+0000 mgr.y (mgr.14505) 81 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:12 vm00 bash[20770]: audit 2026-03-09T17:28:11.484789+0000 mgr.y (mgr.14505) 81 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:12 vm00 bash[20770]: audit 2026-03-09T17:28:11.791054+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:12 vm00 bash[20770]: audit 2026-03-09T17:28:11.791054+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:12 vm00 bash[28333]: audit 2026-03-09T17:28:11.484789+0000 mgr.y (mgr.14505) 81 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
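The audit entries above show two recurring JSON mon/mgr commands: the iscsi gateway client (client.iscsi.iscsi.a) polling {"prefix": "service status", "format": "json"} against mgr.y, and mgr.y in turn asking the monitors for {"prefix": "osd blocklist ls", "format": "json"}. A minimal sketch, not taken from this run, of issuing the same blocklist query by hand; it assumes a node with the ceph CLI and an admin keyring, and the helper name is made up here:

    import json
    import subprocess

    def osd_blocklist_entries():
        # Same command the audit line records, issued via the CLI:
        #   ceph osd blocklist ls --format json
        out = subprocess.run(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        # The JSON reply is typically a list of blocklist entries
        # (client address plus expiry time).
        return json.loads(out)

    if __name__ == "__main__":
        for entry in osd_blocklist_entries():
            print(entry)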
2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:12 vm00 bash[28333]: audit 2026-03-09T17:28:11.484789+0000 mgr.y (mgr.14505) 81 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:12 vm00 bash[28333]: audit 2026-03-09T17:28:11.791054+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:12 vm00 bash[28333]: audit 2026-03-09T17:28:11.791054+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:13.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:12 vm02 bash[23351]: audit 2026-03-09T17:28:11.484789+0000 mgr.y (mgr.14505) 81 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:13.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:12 vm02 bash[23351]: audit 2026-03-09T17:28:11.484789+0000 mgr.y (mgr.14505) 81 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:13.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:12 vm02 bash[23351]: audit 2026-03-09T17:28:11.791054+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:13.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:12 vm02 bash[23351]: audit 2026-03-09T17:28:11.791054+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:14.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:13 vm00 bash[20770]: cluster 2026-03-09T17:28:12.664856+0000 mgr.y (mgr.14505) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:13 vm00 bash[20770]: cluster 2026-03-09T17:28:12.664856+0000 mgr.y (mgr.14505) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:13 vm00 bash[28333]: cluster 2026-03-09T17:28:12.664856+0000 mgr.y (mgr.14505) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:13 vm00 bash[28333]: cluster 2026-03-09T17:28:12.664856+0000 mgr.y (mgr.14505) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:13 vm02 bash[23351]: cluster 2026-03-09T17:28:12.664856+0000 mgr.y (mgr.14505) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:14.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:13 vm02 bash[23351]: cluster 
2026-03-09T17:28:12.664856+0000 mgr.y (mgr.14505) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:15 vm00 bash[20770]: cluster 2026-03-09T17:28:14.665332+0000 mgr.y (mgr.14505) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:15 vm00 bash[20770]: cluster 2026-03-09T17:28:14.665332+0000 mgr.y (mgr.14505) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:15 vm00 bash[28333]: cluster 2026-03-09T17:28:14.665332+0000 mgr.y (mgr.14505) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:15 vm00 bash[28333]: cluster 2026-03-09T17:28:14.665332+0000 mgr.y (mgr.14505) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:15 vm02 bash[23351]: cluster 2026-03-09T17:28:14.665332+0000 mgr.y (mgr.14505) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:15 vm02 bash[23351]: cluster 2026-03-09T17:28:14.665332+0000 mgr.y (mgr.14505) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:28:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:28:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:28:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:17 vm00 bash[28333]: cluster 2026-03-09T17:28:16.665616+0000 mgr.y (mgr.14505) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:17 vm00 bash[28333]: cluster 2026-03-09T17:28:16.665616+0000 mgr.y (mgr.14505) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:17 vm00 bash[20770]: cluster 2026-03-09T17:28:16.665616+0000 mgr.y (mgr.14505) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:17 vm00 bash[20770]: cluster 2026-03-09T17:28:16.665616+0000 mgr.y (mgr.14505) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:18.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:17 vm02 bash[23351]: cluster 2026-03-09T17:28:16.665616+0000 mgr.y (mgr.14505) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:18.135 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:17 vm02 bash[23351]: cluster 2026-03-09T17:28:16.665616+0000 mgr.y (mgr.14505) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:19 vm00 bash[20770]: cluster 2026-03-09T17:28:18.666172+0000 mgr.y (mgr.14505) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:19 vm00 bash[20770]: cluster 2026-03-09T17:28:18.666172+0000 mgr.y (mgr.14505) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:19 vm00 bash[28333]: cluster 2026-03-09T17:28:18.666172+0000 mgr.y (mgr.14505) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:19 vm00 bash[28333]: cluster 2026-03-09T17:28:18.666172+0000 mgr.y (mgr.14505) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:19 vm02 bash[23351]: cluster 2026-03-09T17:28:18.666172+0000 mgr.y (mgr.14505) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:19 vm02 bash[23351]: cluster 2026-03-09T17:28:18.666172+0000 mgr.y (mgr.14505) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:21.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:28:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:28:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:22 vm00 bash[20770]: cluster 2026-03-09T17:28:20.666467+0000 mgr.y (mgr.14505) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:22 vm00 bash[20770]: cluster 2026-03-09T17:28:20.666467+0000 mgr.y (mgr.14505) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:22 vm00 bash[28333]: cluster 2026-03-09T17:28:20.666467+0000 mgr.y (mgr.14505) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:22 vm00 bash[28333]: cluster 2026-03-09T17:28:20.666467+0000 mgr.y (mgr.14505) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:22 vm02 bash[23351]: cluster 2026-03-09T17:28:20.666467+0000 mgr.y (mgr.14505) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
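Every two seconds mgr.y publishes a new pgmap (v42, v43, ...) reporting the same steady state, 132 PGs all active+clean, and each monitor echoes it to its journal. A minimal sketch, not how this run checks it, of waiting for that condition from a script; it assumes ceph CLI access and the usual `ceph status --format json` layout (a "pgmap" section with "num_pgs" and "pgs_by_state"):

    import json
    import subprocess
    import time

    def all_pgs_active_clean():
        # Ask the cluster for its status as JSON and compare the count of
        # active+clean PGs against the total number of PGs.
        status = json.loads(subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True).stdout)
        pgmap = status["pgmap"]
        clean = sum(s["count"] for s in pgmap.get("pgs_by_state", [])
                    if s["state_name"] == "active+clean")
        return clean == pgmap["num_pgs"]

    while not all_pgs_active_clean():
        time.sleep(2)  # roughly the interval at which the mgr publishes a new pgmap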
2026-03-09T17:28:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:22 vm02 bash[23351]: cluster 2026-03-09T17:28:20.666467+0000 mgr.y (mgr.14505) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:23.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:23 vm02 bash[23351]: audit 2026-03-09T17:28:21.493541+0000 mgr.y (mgr.14505) 87 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:23 vm02 bash[23351]: audit 2026-03-09T17:28:21.493541+0000 mgr.y (mgr.14505) 87 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:23 vm00 bash[20770]: audit 2026-03-09T17:28:21.493541+0000 mgr.y (mgr.14505) 87 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:23 vm00 bash[20770]: audit 2026-03-09T17:28:21.493541+0000 mgr.y (mgr.14505) 87 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:23 vm00 bash[28333]: audit 2026-03-09T17:28:21.493541+0000 mgr.y (mgr.14505) 87 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:23 vm00 bash[28333]: audit 2026-03-09T17:28:21.493541+0000 mgr.y (mgr.14505) 87 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:24 vm02 bash[23351]: cluster 2026-03-09T17:28:22.666882+0000 mgr.y (mgr.14505) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:24 vm02 bash[23351]: cluster 2026-03-09T17:28:22.666882+0000 mgr.y (mgr.14505) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:24 vm00 bash[20770]: cluster 2026-03-09T17:28:22.666882+0000 mgr.y (mgr.14505) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:24 vm00 bash[20770]: cluster 2026-03-09T17:28:22.666882+0000 mgr.y (mgr.14505) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:24 vm00 bash[28333]: cluster 2026-03-09T17:28:22.666882+0000 mgr.y (mgr.14505) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:24 vm00 bash[28333]: cluster 2026-03-09T17:28:22.666882+0000 
mgr.y (mgr.14505) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:26 vm02 bash[23351]: cluster 2026-03-09T17:28:24.667432+0000 mgr.y (mgr.14505) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:26 vm02 bash[23351]: cluster 2026-03-09T17:28:24.667432+0000 mgr.y (mgr.14505) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:26.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:26 vm00 bash[20770]: cluster 2026-03-09T17:28:24.667432+0000 mgr.y (mgr.14505) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:26 vm00 bash[20770]: cluster 2026-03-09T17:28:24.667432+0000 mgr.y (mgr.14505) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:28:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:28:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:28:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:26 vm00 bash[28333]: cluster 2026-03-09T17:28:24.667432+0000 mgr.y (mgr.14505) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:26 vm00 bash[28333]: cluster 2026-03-09T17:28:24.667432+0000 mgr.y (mgr.14505) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:27.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:27 vm00 bash[20770]: cluster 2026-03-09T17:28:26.667687+0000 mgr.y (mgr.14505) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:27.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:27 vm00 bash[20770]: cluster 2026-03-09T17:28:26.667687+0000 mgr.y (mgr.14505) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:27 vm00 bash[20770]: audit 2026-03-09T17:28:26.808045+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:27 vm00 bash[20770]: audit 2026-03-09T17:28:26.808045+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:27 vm00 bash[28333]: cluster 2026-03-09T17:28:26.667687+0000 mgr.y (mgr.14505) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:27.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:27 vm00 bash[28333]: cluster 2026-03-09T17:28:26.667687+0000 mgr.y (mgr.14505) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:27 vm00 bash[28333]: audit 2026-03-09T17:28:26.808045+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:27 vm00 bash[28333]: audit 2026-03-09T17:28:26.808045+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:27.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:27 vm02 bash[23351]: cluster 2026-03-09T17:28:26.667687+0000 mgr.y (mgr.14505) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:27.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:27 vm02 bash[23351]: cluster 2026-03-09T17:28:26.667687+0000 mgr.y (mgr.14505) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:27.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:27 vm02 bash[23351]: audit 2026-03-09T17:28:26.808045+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:27.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:27 vm02 bash[23351]: audit 2026-03-09T17:28:26.808045+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:30.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:29 vm00 bash[20770]: cluster 2026-03-09T17:28:28.668193+0000 mgr.y (mgr.14505) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:29 vm00 bash[20770]: cluster 2026-03-09T17:28:28.668193+0000 mgr.y (mgr.14505) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:29 vm00 bash[28333]: cluster 2026-03-09T17:28:28.668193+0000 mgr.y (mgr.14505) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:29 vm00 bash[28333]: cluster 2026-03-09T17:28:28.668193+0000 mgr.y (mgr.14505) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:30.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:29 vm02 bash[23351]: cluster 2026-03-09T17:28:28.668193+0000 mgr.y (mgr.14505) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:30.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:29 vm02 bash[23351]: cluster 
2026-03-09T17:28:28.668193+0000 mgr.y (mgr.14505) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:31.799 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:28:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:28:32.133 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:31 vm02 bash[23351]: cluster 2026-03-09T17:28:30.668474+0000 mgr.y (mgr.14505) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:32.133 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:31 vm02 bash[23351]: cluster 2026-03-09T17:28:30.668474+0000 mgr.y (mgr.14505) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:32.133 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:31 vm02 bash[23351]: audit 2026-03-09T17:28:30.895697+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:28:32.133 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:31 vm02 bash[23351]: audit 2026-03-09T17:28:30.895697+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:31 vm00 bash[20770]: cluster 2026-03-09T17:28:30.668474+0000 mgr.y (mgr.14505) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:31 vm00 bash[20770]: cluster 2026-03-09T17:28:30.668474+0000 mgr.y (mgr.14505) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:31 vm00 bash[20770]: audit 2026-03-09T17:28:30.895697+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:31 vm00 bash[20770]: audit 2026-03-09T17:28:30.895697+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:31 vm00 bash[28333]: cluster 2026-03-09T17:28:30.668474+0000 mgr.y (mgr.14505) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:31 vm00 bash[28333]: cluster 2026-03-09T17:28:30.668474+0000 mgr.y (mgr.14505) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:31 vm00 bash[28333]: audit 2026-03-09T17:28:30.895697+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:28:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:31 vm00 
bash[28333]: audit 2026-03-09T17:28:30.895697+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:28:33.105 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:32 vm00 bash[20770]: audit 2026-03-09T17:28:31.501671+0000 mgr.y (mgr.14505) 93 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:33.105 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:32 vm00 bash[20770]: audit 2026-03-09T17:28:31.501671+0000 mgr.y (mgr.14505) 93 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:33.105 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:32 vm00 bash[28333]: audit 2026-03-09T17:28:31.501671+0000 mgr.y (mgr.14505) 93 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:33.105 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:32 vm00 bash[28333]: audit 2026-03-09T17:28:31.501671+0000 mgr.y (mgr.14505) 93 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:33.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:32 vm02 bash[23351]: audit 2026-03-09T17:28:31.501671+0000 mgr.y (mgr.14505) 93 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:33.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:32 vm02 bash[23351]: audit 2026-03-09T17:28:31.501671+0000 mgr.y (mgr.14505) 93 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:33 vm02 bash[23351]: cluster 2026-03-09T17:28:32.668767+0000 mgr.y (mgr.14505) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:34.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:33 vm02 bash[23351]: cluster 2026-03-09T17:28:32.668767+0000 mgr.y (mgr.14505) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:34.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:33 vm00 bash[20770]: cluster 2026-03-09T17:28:32.668767+0000 mgr.y (mgr.14505) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:33 vm00 bash[20770]: cluster 2026-03-09T17:28:32.668767+0000 mgr.y (mgr.14505) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:33 vm00 bash[28333]: cluster 2026-03-09T17:28:32.668767+0000 mgr.y (mgr.14505) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:33 vm00 bash[28333]: cluster 2026-03-09T17:28:32.668767+0000 mgr.y (mgr.14505) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:35.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:35 vm00 bash[20770]: cluster 2026-03-09T17:28:34.669248+0000 mgr.y (mgr.14505) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:35 vm00 bash[20770]: cluster 2026-03-09T17:28:34.669248+0000 mgr.y (mgr.14505) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:35 vm00 bash[28333]: cluster 2026-03-09T17:28:34.669248+0000 mgr.y (mgr.14505) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:35 vm00 bash[28333]: cluster 2026-03-09T17:28:34.669248+0000 mgr.y (mgr.14505) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:35.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:35 vm02 bash[23351]: cluster 2026-03-09T17:28:34.669248+0000 mgr.y (mgr.14505) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:35 vm02 bash[23351]: cluster 2026-03-09T17:28:34.669248+0000 mgr.y (mgr.14505) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:36.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:28:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:28:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:28:37.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.285905+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.285905+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.295155+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.295155+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.991894+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.991894+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.999242+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:37 vm00 bash[20770]: audit 2026-03-09T17:28:36.999242+0000 mon.a (mon.0) 819 : audit [INF] 
from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.285905+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.285905+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.295155+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.295155+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.991894+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.991894+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.999242+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.547 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:37 vm00 bash[28333]: audit 2026-03-09T17:28:36.999242+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.285905+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.285905+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.295155+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.295155+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.991894+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.991894+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.999242+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:37 vm02 bash[23351]: audit 2026-03-09T17:28:36.999242+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:38.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: cluster 2026-03-09T17:28:36.669503+0000 mgr.y (mgr.14505) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:38.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: cluster 2026-03-09T17:28:36.669503+0000 mgr.y (mgr.14505) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: audit 2026-03-09T17:28:37.534283+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: audit 2026-03-09T17:28:37.534283+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: audit 2026-03-09T17:28:37.535046+0000 mon.c (mon.2) 73 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: audit 2026-03-09T17:28:37.535046+0000 mon.c (mon.2) 73 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: audit 2026-03-09T17:28:37.607351+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:38 vm00 bash[20770]: audit 2026-03-09T17:28:37.607351+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: cluster 2026-03-09T17:28:36.669503+0000 mgr.y (mgr.14505) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: cluster 2026-03-09T17:28:36.669503+0000 mgr.y (mgr.14505) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: audit 2026-03-09T17:28:37.534283+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: audit 2026-03-09T17:28:37.534283+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: audit 2026-03-09T17:28:37.535046+0000 mon.c (mon.2) 73 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: audit 2026-03-09T17:28:37.535046+0000 mon.c (mon.2) 73 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
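The mon.c audit entries at 17:28:37 record the mgr issuing "config generate-minimal-conf" followed by "auth get client.admin"; this is the pair of commands cephadm typically runs when it needs to place a minimal ceph.conf and the admin keyring on a managed host. A minimal sketch of running the same two commands by hand, assuming CLI access (the output paths are placeholders):

    import subprocess

    def write_minimal_conf_and_keyring(conf_path="/tmp/ceph.minimal.conf",
                                       keyring_path="/tmp/ceph.client.admin.keyring"):
        # ceph config generate-minimal-conf -> a stripped-down ceph.conf (fsid, mon addresses)
        conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                              check=True, capture_output=True, text=True).stdout
        # ceph auth get client.admin -> the admin keyring in keyring format
        keyring = subprocess.run(["ceph", "auth", "get", "client.admin"],
                                 check=True, capture_output=True, text=True).stdout
        with open(conf_path, "w") as f:
            f.write(conf)
        with open(keyring_path, "w") as f:
            f.write(keyring)
        return conf_path, keyring_path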
2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: audit 2026-03-09T17:28:37.607351+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:38 vm00 bash[28333]: audit 2026-03-09T17:28:37.607351+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: cluster 2026-03-09T17:28:36.669503+0000 mgr.y (mgr.14505) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: cluster 2026-03-09T17:28:36.669503+0000 mgr.y (mgr.14505) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: audit 2026-03-09T17:28:37.534283+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: audit 2026-03-09T17:28:37.534283+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: audit 2026-03-09T17:28:37.535046+0000 mon.c (mon.2) 73 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: audit 2026-03-09T17:28:37.535046+0000 mon.c (mon.2) 73 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: audit 2026-03-09T17:28:37.607351+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:38 vm02 bash[23351]: audit 2026-03-09T17:28:37.607351+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:39.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:39 vm00 bash[20770]: cluster 2026-03-09T17:28:38.669953+0000 mgr.y (mgr.14505) 97 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:39 vm00 bash[20770]: cluster 2026-03-09T17:28:38.669953+0000 mgr.y (mgr.14505) 97 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:39 vm00 bash[28333]: cluster 2026-03-09T17:28:38.669953+0000 mgr.y (mgr.14505) 97 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:39 vm00 bash[28333]: cluster 2026-03-09T17:28:38.669953+0000 mgr.y (mgr.14505) 97 : cluster 
[DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:39.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:39 vm02 bash[23351]: cluster 2026-03-09T17:28:38.669953+0000 mgr.y (mgr.14505) 97 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:39 vm02 bash[23351]: cluster 2026-03-09T17:28:38.669953+0000 mgr.y (mgr.14505) 97 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:41.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:41 vm02 bash[23351]: cluster 2026-03-09T17:28:40.670249+0000 mgr.y (mgr.14505) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:41 vm02 bash[23351]: cluster 2026-03-09T17:28:40.670249+0000 mgr.y (mgr.14505) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:28:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:28:42.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:41 vm00 bash[20770]: cluster 2026-03-09T17:28:40.670249+0000 mgr.y (mgr.14505) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:41 vm00 bash[20770]: cluster 2026-03-09T17:28:40.670249+0000 mgr.y (mgr.14505) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:41 vm00 bash[28333]: cluster 2026-03-09T17:28:40.670249+0000 mgr.y (mgr.14505) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:41 vm00 bash[28333]: cluster 2026-03-09T17:28:40.670249+0000 mgr.y (mgr.14505) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:43.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:42 vm00 bash[20770]: audit 2026-03-09T17:28:41.509947+0000 mgr.y (mgr.14505) 99 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:42 vm00 bash[20770]: audit 2026-03-09T17:28:41.509947+0000 mgr.y (mgr.14505) 99 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:42 vm00 bash[20770]: audit 2026-03-09T17:28:41.814229+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:42 vm00 bash[20770]: audit 2026-03-09T17:28:41.814229+0000 mon.c (mon.2) 
74 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:42 vm00 bash[28333]: audit 2026-03-09T17:28:41.509947+0000 mgr.y (mgr.14505) 99 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:42 vm00 bash[28333]: audit 2026-03-09T17:28:41.509947+0000 mgr.y (mgr.14505) 99 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:42 vm00 bash[28333]: audit 2026-03-09T17:28:41.814229+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:42 vm00 bash[28333]: audit 2026-03-09T17:28:41.814229+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:43.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:42 vm02 bash[23351]: audit 2026-03-09T17:28:41.509947+0000 mgr.y (mgr.14505) 99 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:42 vm02 bash[23351]: audit 2026-03-09T17:28:41.509947+0000 mgr.y (mgr.14505) 99 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:42 vm02 bash[23351]: audit 2026-03-09T17:28:41.814229+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:42 vm02 bash[23351]: audit 2026-03-09T17:28:41.814229+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:44.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:43 vm00 bash[20770]: cluster 2026-03-09T17:28:42.670547+0000 mgr.y (mgr.14505) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:44.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:43 vm00 bash[20770]: cluster 2026-03-09T17:28:42.670547+0000 mgr.y (mgr.14505) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:43 vm00 bash[28333]: cluster 2026-03-09T17:28:42.670547+0000 mgr.y (mgr.14505) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:43 vm00 bash[28333]: cluster 2026-03-09T17:28:42.670547+0000 mgr.y (mgr.14505) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:44.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:43 vm02 bash[23351]: cluster 2026-03-09T17:28:42.670547+0000 mgr.y (mgr.14505) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:43 vm02 bash[23351]: cluster 2026-03-09T17:28:42.670547+0000 mgr.y (mgr.14505) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:45 vm02 bash[23351]: cluster 2026-03-09T17:28:44.670994+0000 mgr.y (mgr.14505) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:45 vm02 bash[23351]: cluster 2026-03-09T17:28:44.670994+0000 mgr.y (mgr.14505) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:46.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:45 vm00 bash[20770]: cluster 2026-03-09T17:28:44.670994+0000 mgr.y (mgr.14505) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:45 vm00 bash[20770]: cluster 2026-03-09T17:28:44.670994+0000 mgr.y (mgr.14505) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:45 vm00 bash[28333]: cluster 2026-03-09T17:28:44.670994+0000 mgr.y (mgr.14505) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:45 vm00 bash[28333]: cluster 2026-03-09T17:28:44.670994+0000 mgr.y (mgr.14505) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:46.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:28:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:28:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:28:48.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:47 vm00 bash[20770]: cluster 2026-03-09T17:28:46.671237+0000 mgr.y (mgr.14505) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:47 vm00 bash[20770]: cluster 2026-03-09T17:28:46.671237+0000 mgr.y (mgr.14505) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:47 vm00 bash[28333]: cluster 2026-03-09T17:28:46.671237+0000 mgr.y (mgr.14505) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:47 vm00 bash[28333]: cluster 2026-03-09T17:28:46.671237+0000 mgr.y (mgr.14505) 102 : cluster 
[DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:47 vm02 bash[23351]: cluster 2026-03-09T17:28:46.671237+0000 mgr.y (mgr.14505) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:47 vm02 bash[23351]: cluster 2026-03-09T17:28:46.671237+0000 mgr.y (mgr.14505) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:50.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:50 vm00 bash[20770]: cluster 2026-03-09T17:28:48.671717+0000 mgr.y (mgr.14505) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:50.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:50 vm00 bash[20770]: cluster 2026-03-09T17:28:48.671717+0000 mgr.y (mgr.14505) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:50.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:50 vm00 bash[28333]: cluster 2026-03-09T17:28:48.671717+0000 mgr.y (mgr.14505) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:50.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:50 vm00 bash[28333]: cluster 2026-03-09T17:28:48.671717+0000 mgr.y (mgr.14505) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:50 vm02 bash[23351]: cluster 2026-03-09T17:28:48.671717+0000 mgr.y (mgr.14505) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:50 vm02 bash[23351]: cluster 2026-03-09T17:28:48.671717+0000 mgr.y (mgr.14505) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: git switch -c <new-branch-name> 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:Or undo this operation with: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: git switch - 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-09T17:28:51.823 INFO:tasks.workunit.client.0.vm00.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T17:28:51.829 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T17:28:51.875 INFO:tasks.workunit.client.0.vm00.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T17:28:51.876 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T17:28:51.876 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T17:28:51.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:28:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:28:51.917 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T17:28:51.949 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T17:28:51.975 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T17:28:51.976 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T17:28:51.976 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T17:28:52.001 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T17:28:52.004 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T17:28:52.004 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T17:28:52.050 INFO:tasks.workunit:Running workunits matching rados/test.sh on client.0... 2026-03-09T17:28:52.050 INFO:tasks.workunit:Running workunit rados/test.sh...
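The shell trace that follows is qa/workunits/rados/test.sh launching the librados gtest suites in parallel. A minimal sketch of the launch pattern, reconstructed from the trace below for readability (the checked-out test.sh is the authoritative version; the full suite list in this run has 27 entries, and the loop body here is simplified):

    # Illustrative reconstruction of the parallel launch pattern seen in the trace.
    # Paths and suite names are taken from the log; everything else is a sketch.
    GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report
    mkdir -p "$GTEST_OUTPUT_DIR"
    declare -A pids
    for f in api_aio api_aio_pp api_io; do        # truncated list, for brevity
        r=$(printf '%25s' "$f")                   # padded prefix used on each output line
        bash -o pipefail -exc "ceph_test_rados_$f \
            --gtest_output=xml:$GTEST_OUTPUT_DIR/$f.xml 2>&1 \
            | tee ceph_test_rados_$f.log | sed \"s/^/$r: /\"" &
        pid=$!
        echo "test $f on pid $pid"
        pids[$f]=$pid
    done

Because every suite runs in the background and its output goes through its own tee and sed, the '+ tee', '+ sed', and '+ ceph_test_rados_*' trace lines below interleave rather than appearing in launch order.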
2026-03-09T17:28:52.050 DEBUG:teuthology.orchestra.run.vm00:workunit test rados/test.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ parallel=1 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' '' = --serial ']' 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ crimson=0 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' '' = --crimson ']' 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ color= 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -t 1 ']' 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ trap cleanup EXIT ERR HUP INT QUIT 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-09T17:28:52.098 INFO:tasks.workunit.client.0.vm00.stderr:+ mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-09T17:28:52.099 INFO:tasks.workunit.client.0.vm00.stderr:+ declare -A pids 2026-03-09T17:28:52.099 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.100 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.100 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_aio 2026-03-09T17:28:52.100 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_aio' 2026-03-09T17:28:52.100 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_aio 2026-03-09T17:28:52.101 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stdout:test api_aio on pid 59903 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_aio 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59903 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_aio on pid 59903' 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59903 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_aio_pp 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 
'ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2>&1 | tee ceph_test_rados_api_aio.log | sed "s/^/ api_aio: /"' 2026-03-09T17:28:52.106 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_aio_pp' 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_aio_pp 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2026-03-09T17:28:52.107 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.108 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_aio.log 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_aio: /' 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stdout:test api_aio_pp on pid 59911 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_aio_pp 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59911 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_aio_pp on pid 59911' 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59911 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.109 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2>&1 | tee ceph_test_rados_api_aio_pp.log | sed "s/^/ api_aio_pp: /"' 2026-03-09T17:28:52.110 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_io 2026-03-09T17:28:52.110 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_io' 2026-03-09T17:28:52.110 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.111 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.111 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.111 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.111 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_io 2026-03-09T17:28:52.111 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2026-03-09T17:28:52.111 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.112 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_aio_pp.log 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stdout:test api_io on pid 59919 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_io 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59919 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test 
api_io on pid 59919' 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59919 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2>&1 | tee ceph_test_rados_api_io.log | sed "s/^/ api_io: /"' 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.114 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.115 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_io.log 2026-03-09T17:28:52.115 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2026-03-09T17:28:52.118 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_io_pp 2026-03-09T17:28:52.119 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_io_pp' 2026-03-09T17:28:52.120 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.120 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_aio_pp: /' 2026-03-09T17:28:52.122 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_io: /' 2026-03-09T17:28:52.123 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_io_pp 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stdout:test api_io_pp on pid 59927 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_io_pp 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59927 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_io_pp on pid 59927' 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59927 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.124 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2>&1 | tee ceph_test_rados_api_io_pp.log | sed "s/^/ api_io_pp: /"' 2026-03-09T17:28:52.127 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.127 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.127 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.127 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.130 
INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_asio 2026-03-09T17:28:52.139 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_asio' 2026-03-09T17:28:52.140 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_asio 2026-03-09T17:28:52.142 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2026-03-09T17:28:52.142 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_io_pp: /' 2026-03-09T17:28:52.143 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_io_pp.log 2026-03-09T17:28:52.151 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stdout:test api_asio on pid 59983 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_asio 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59983 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_asio on pid 59983' 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59983 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.162 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2>&1 | tee ceph_test_rados_api_asio.log | sed "s/^/ api_asio: /"' 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_list 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_list' 2026-03-09T17:28:52.163 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2026-03-09T17:28:52.171 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.171 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_asio: /' 2026-03-09T17:28:52.171 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_asio.log 2026-03-09T17:28:52.172 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_list 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stdout:test api_list on pid 59994 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_list 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59994 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_list on pid 59994' 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59994 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp 
api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.175 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.178 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_lock 2026-03-09T17:28:52.178 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_lock' 2026-03-09T17:28:52.178 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2>&1 | tee ceph_test_rados_api_list.log | sed "s/^/ api_list: /"' 2026-03-09T17:28:52.179 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.179 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.179 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.179 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.182 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_list.log 2026-03-09T17:28:52.182 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_list: /' 2026-03-09T17:28:52.183 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2026-03-09T17:28:52.187 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_lock 2026-03-09T17:28:52.187 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stdout:test api_lock on pid 60034 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_lock 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60034 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_lock on pid 60034' 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60034 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.190 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2>&1 | tee ceph_test_rados_api_lock.log | sed "s/^/ api_lock: /"' 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_lock_pp 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_lock_pp' 2026-03-09T17:28:52.192 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_lock: /' 2026-03-09T17:28:52.193 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_lock.log 2026-03-09T17:28:52.193 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_lock 
--gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2026-03-09T17:28:52.204 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.208 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_lock_pp 2026-03-09T17:28:52.208 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_lock_pp 2026-03-09T17:28:52.209 INFO:tasks.workunit.client.0.vm00.stdout:test api_lock_pp on pid 60057 2026-03-09T17:28:52.209 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60057 2026-03-09T17:28:52.209 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_lock_pp on pid 60057' 2026-03-09T17:28:52.209 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60057 2026-03-09T17:28:52.209 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.209 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.211 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_misc 2026-03-09T17:28:52.211 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_misc' 2026-03-09T17:28:52.212 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2>&1 | tee ceph_test_rados_api_lock_pp.log | sed "s/^/ api_lock_pp: /"' 2026-03-09T17:28:52.214 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.214 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.214 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.214 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.214 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2026-03-09T17:28:52.215 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_lock_pp: /' 2026-03-09T17:28:52.216 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_misc 2026-03-09T17:28:52.217 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.221 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_lock_pp.log 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stdout:test api_misc on pid 60082 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_misc 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60082 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_misc on pid 60082' 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60082 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.222 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.223 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_misc_pp 
2026-03-09T17:28:52.223 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_misc_pp' 2026-03-09T17:28:52.224 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2>&1 | tee ceph_test_rados_api_misc.log | sed "s/^/ api_misc: /"' 2026-03-09T17:28:52.226 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.226 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.227 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.227 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.227 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_misc_pp 2026-03-09T17:28:52.230 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.232 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_misc: /' 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stdout:test api_misc_pp on pid 60100 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_misc_pp 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60100 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_misc_pp on pid 60100' 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60100 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_tier_pp 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_misc.log 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_tier_pp' 2026-03-09T17:28:52.233 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2>&1 | tee ceph_test_rados_api_misc_pp.log | sed "s/^/ api_misc_pp: /"' 2026-03-09T17:28:52.234 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.234 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.234 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.234 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.235 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_misc_pp: /' 2026-03-09T17:28:52.235 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_misc_pp.log 2026-03-09T17:28:52.236 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.237 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_tier_pp 2026-03-09T17:28:52.237 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2026-03-09T17:28:52.238 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stdout:test api_tier_pp on pid 60112 
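Each suite writes a JUnit-style XML report via --gtest_output=xml:... into /home/ubuntu/cephtest/archive/unit_test_xml_report. A hypothetical post-run check (not part of rados/test.sh) that lists reports recording a non-zero failure or error count:

    # Hypothetical helper, shown only to illustrate where the gtest XML reports land.
    report_dir=/home/ubuntu/cephtest/archive/unit_test_xml_report
    grep -El 'failures="[1-9][0-9]*"|errors="[1-9][0-9]*"' "$report_dir"/*.xml \
        || echo "no failing suites found in $report_dir"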
2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_tier_pp 2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60112 2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_tier_pp on pid 60112' 2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60112 2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.239 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.244 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_pool 2026-03-09T17:28:52.244 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_pool' 2026-03-09T17:28:52.244 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2>&1 | tee ceph_test_rados_api_tier_pp.log | sed "s/^/ api_tier_pp: /"' 2026-03-09T17:28:52.245 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.245 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.245 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.245 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.251 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_tier_pp: /' 2026-03-09T17:28:52.252 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_tier_pp.log 2026-03-09T17:28:52.253 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2026-03-09T17:28:52.253 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.254 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_pool 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stdout:test api_pool on pid 60135 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_pool 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60135 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_pool on pid 60135' 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60135 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.261 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.267 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_snapshots 2026-03-09T17:28:52.267 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_snapshots' 2026-03-09T17:28:52.267 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2>&1 | tee ceph_test_rados_api_pool.log | sed "s/^/ api_pool: /"' 
2026-03-09T17:28:52.267 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_snapshots 2026-03-09T17:28:52.268 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.271 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.271 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.272 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.274 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_pool: /' 2026-03-09T17:28:52.275 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_pool.log 2026-03-09T17:28:52.275 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2026-03-09T17:28:52.276 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.285 INFO:tasks.workunit.client.0.vm00.stdout:test api_snapshots on pid 60160 2026-03-09T17:28:52.285 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_snapshots 2026-03-09T17:28:52.285 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60160 2026-03-09T17:28:52.285 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_snapshots on pid 60160' 2026-03-09T17:28:52.285 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60160 2026-03-09T17:28:52.286 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.286 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.287 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_snapshots_pp 2026-03-09T17:28:52.287 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_snapshots_pp' 2026-03-09T17:28:52.287 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2>&1 | tee ceph_test_rados_api_snapshots.log | sed "s/^/ api_snapshots: /"' 2026-03-09T17:28:52.288 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.288 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.288 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.288 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.293 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2026-03-09T17:28:52.294 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_snapshots: /' 2026-03-09T17:28:52.295 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_snapshots.log 2026-03-09T17:28:52.295 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_snapshots_pp 2026-03-09T17:28:52.295 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.309 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_snapshots_pp 2026-03-09T17:28:52.309 INFO:tasks.workunit.client.0.vm00.stdout:test api_snapshots_pp on pid 60193 2026-03-09T17:28:52.309 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60193 2026-03-09T17:28:52.309 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_snapshots_pp on pid 60193' 2026-03-09T17:28:52.309 
INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60193 2026-03-09T17:28:52.309 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.309 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.310 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_stat 2026-03-09T17:28:52.311 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_stat' 2026-03-09T17:28:52.311 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2>&1 | tee ceph_test_rados_api_snapshots_pp.log | sed "s/^/ api_snapshots_pp: /"' 2026-03-09T17:28:52.316 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.324 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.324 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.324 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.330 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_snapshots_pp.log 2026-03-09T17:28:52.330 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_snapshots_pp: /' 2026-03-09T17:28:52.330 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2026-03-09T17:28:52.333 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_stat 2026-03-09T17:28:52.333 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.338 INFO:tasks.workunit.client.0.vm00.stdout:test api_stat on pid 60208 2026-03-09T17:28:52.338 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_stat 2026-03-09T17:28:52.338 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60208 2026-03-09T17:28:52.338 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_stat on pid 60208' 2026-03-09T17:28:52.339 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60208 2026-03-09T17:28:52.339 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.339 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.348 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_stat_pp 2026-03-09T17:28:52.348 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2>&1 | tee ceph_test_rados_api_stat.log | sed "s/^/ api_stat: /"' 2026-03-09T17:28:52.350 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_stat_pp' 2026-03-09T17:28:52.351 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.351 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.351 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.351 
INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.369 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2026-03-09T17:28:52.374 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_stat.log 2026-03-09T17:28:52.374 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_stat: /' 2026-03-09T17:28:52.380 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_stat_pp 2026-03-09T17:28:52.380 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stdout:test api_stat_pp on pid 60239 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_stat_pp 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60239 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_stat_pp on pid 60239' 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60239 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.385 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.406 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2>&1 | tee ceph_test_rados_api_stat_pp.log | sed "s/^/ api_stat_pp: /"' 2026-03-09T17:28:52.406 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.406 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.406 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.406 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.406 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_watch_notify 2026-03-09T17:28:52.407 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_watch_notify' 2026-03-09T17:28:52.407 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.408 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2026-03-09T17:28:52.413 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_watch_notify 2026-03-09T17:28:52.413 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_stat_pp: /' 2026-03-09T17:28:52.413 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_stat_pp.log 2026-03-09T17:28:52.416 INFO:tasks.workunit.client.0.vm00.stdout:test api_watch_notify on pid 60278 2026-03-09T17:28:52.416 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_watch_notify 2026-03-09T17:28:52.416 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60278 2026-03-09T17:28:52.416 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_watch_notify on pid 60278' 2026-03-09T17:28:52.416 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60278 2026-03-09T17:28:52.417 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat 
api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.417 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.428 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2>&1 | tee ceph_test_rados_api_watch_notify.log | sed "s/^/ api_watch_notify: /"' 2026-03-09T17:28:52.428 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_watch_notify_pp 2026-03-09T17:28:52.431 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.431 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_watch_notify_pp' 2026-03-09T17:28:52.431 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.431 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.431 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.439 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_watch_notify_pp 2026-03-09T17:28:52.439 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.440 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_watch_notify: /' 2026-03-09T17:28:52.441 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_watch_notify.log 2026-03-09T17:28:52.441 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2026-03-09T17:28:52.445 INFO:tasks.workunit.client.0.vm00.stdout:test api_watch_notify_pp on pid 60317 2026-03-09T17:28:52.445 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_watch_notify_pp 2026-03-09T17:28:52.445 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60317 2026-03-09T17:28:52.446 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_watch_notify_pp on pid 60317' 2026-03-09T17:28:52.446 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60317 2026-03-09T17:28:52.446 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.446 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.446 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2>&1 | tee ceph_test_rados_api_watch_notify_pp.log | sed "s/^/ api_watch_notify_pp: /"' 2026-03-09T17:28:52.451 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_cmd 2026-03-09T17:28:52.452 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.452 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.452 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.452 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.453 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_cmd' 2026-03-09T17:28:52.453 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_watch_notify_pp.log 2026-03-09T17:28:52.454 
INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_watch_notify_pp: /' 2026-03-09T17:28:52.455 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2026-03-09T17:28:52.456 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_cmd 2026-03-09T17:28:52.458 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.458 INFO:tasks.workunit.client.0.vm00.stdout:test api_cmd on pid 60335 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_cmd 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60335 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_cmd on pid 60335' 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60335 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_cmd_pp 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_cmd_pp' 2026-03-09T17:28:52.459 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2>&1 | tee ceph_test_rados_api_cmd.log | sed "s/^/ api_cmd: /"' 2026-03-09T17:28:52.466 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.466 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.466 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.466 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.467 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_cmd_pp 2026-03-09T17:28:52.472 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_cmd.log 2026-03-09T17:28:52.472 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2026-03-09T17:28:52.473 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_cmd: /' 2026-03-09T17:28:52.473 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.475 INFO:tasks.workunit.client.0.vm00.stdout:test api_cmd_pp on pid 60352 2026-03-09T17:28:52.475 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_cmd_pp 2026-03-09T17:28:52.475 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60352 2026-03-09T17:28:52.475 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_cmd_pp on pid 60352' 2026-03-09T17:28:52.475 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60352 2026-03-09T17:28:52.475 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.475 
INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.478 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2>&1 | tee ceph_test_rados_api_cmd_pp.log | sed "s/^/ api_cmd_pp: /"' 2026-03-09T17:28:52.478 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_service 2026-03-09T17:28:52.478 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_service' 2026-03-09T17:28:52.478 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.478 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.478 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.479 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.479 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_cmd_pp: /' 2026-03-09T17:28:52.482 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.483 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_service 2026-03-09T17:28:52.483 INFO:tasks.workunit.client.0.vm00.stdout:test api_service on pid 60374 2026-03-09T17:28:52.483 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_service 2026-03-09T17:28:52.484 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2026-03-09T17:28:52.484 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60374 2026-03-09T17:28:52.484 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_service on pid 60374' 2026-03-09T17:28:52.484 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60374 2026-03-09T17:28:52.484 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.484 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.487 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2>&1 | tee ceph_test_rados_api_service.log | sed "s/^/ api_service: /"' 2026-03-09T17:28:52.487 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.487 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.487 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.487 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.488 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_cmd_pp.log 2026-03-09T17:28:52.490 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_service_pp 2026-03-09T17:28:52.493 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_service: /' 2026-03-09T17:28:52.493 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_service_pp' 2026-03-09T17:28:52.495 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_service.log 2026-03-09T17:28:52.499 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2026-03-09T17:28:52.508 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_service_pp 
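The pids[$f]=... assignments in the trace record each background suite's pid in an associative array. A minimal sketch of how such recorded pids are typically reaped once all suites have been launched (the real script's cleanup and reporting are more elaborate):

    # Sketch only: wait on each recorded pid and report which suite failed.
    ret=0
    for t in "${!pids[@]}"; do
        if ! wait "${pids[$t]}"; then
            echo "error in $t (pid ${pids[$t]})"
            ret=1
        fi
    done
    exit $ret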
2026-03-09T17:28:52.509 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.511 INFO:tasks.workunit.client.0.vm00.stdout:test api_service_pp on pid 60398 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_service_pp 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60398 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_service_pp on pid 60398' 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60398 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_c_write_operations 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_c_write_operations' 2026-03-09T17:28:52.512 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2>&1 | tee ceph_test_rados_api_service_pp.log | sed "s/^/ api_service_pp: /"' 2026-03-09T17:28:52.513 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.513 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.513 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.513 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.513 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_service_pp: /' 2026-03-09T17:28:52.514 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_service_pp.log 2026-03-09T17:28:52.515 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2026-03-09T17:28:52.515 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.516 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_c_write_operations 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stdout:test api_c_write_operations on pid 60422 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_c_write_operations 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60422 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_c_write_operations on pid 60422' 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60422 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.522 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.524 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_write_operations 
--gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2>&1 | tee ceph_test_rados_api_c_write_operations.log | sed "s/^/ api_c_write_operations: /"' 2026-03-09T17:28:52.535 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_c_read_operations 2026-03-09T17:28:52.535 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_c_read_operations' 2026-03-09T17:28:52.535 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.537 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_c_read_operations 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stdout:test api_c_read_operations on pid 60438 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_c_read_operations 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60438 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_c_read_operations on pid 60438' 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60438 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s list_parallel 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' list_parallel' 2026-03-09T17:28:52.538 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2>&1 | tee ceph_test_rados_api_c_read_operations.log | sed "s/^/ api_c_read_operations: /"' 2026-03-09T17:28:52.539 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.539 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.539 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.539 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.541 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.541 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.541 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.541 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.542 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_c_read_operations: /' 2026-03-09T17:28:52.542 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_c_read_operations.log 2026-03-09T17:28:52.543 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.545 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2026-03-09T17:28:52.545 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_c_write_operations: /' 2026-03-09T17:28:52.546 INFO:tasks.workunit.client.0.vm00.stderr:++ echo list_parallel 2026-03-09T17:28:52.548 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_c_write_operations.log 2026-03-09T17:28:52.552 INFO:tasks.workunit.client.0.vm00.stderr:+ 
ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2026-03-09T17:28:52.556 INFO:tasks.workunit.client.0.vm00.stdout:test list_parallel on pid 60473 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=list_parallel 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60473 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test list_parallel on pid 60473' 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60473 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s open_pools_parallel 2026-03-09T17:28:52.557 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2>&1 | tee ceph_test_rados_list_parallel.log | sed "s/^/ list_parallel: /"' 2026-03-09T17:28:52.558 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' open_pools_parallel' 2026-03-09T17:28:52.558 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.558 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.563 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.563 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.566 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_list_parallel.log 2026-03-09T17:28:52.567 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ list_parallel: /' 2026-03-09T17:28:52.568 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2026-03-09T17:28:52.569 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.572 INFO:tasks.workunit.client.0.vm00.stderr:++ echo open_pools_parallel 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stdout:test open_pools_parallel on pid 60504 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=open_pools_parallel 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60504 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test open_pools_parallel on pid 60504' 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60504 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T17:28:52.576 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.587 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s delete_pools_parallel 2026-03-09T17:28:52.588 
INFO:tasks.workunit.client.0.vm00.stderr:+ r=' delete_pools_parallel' 2026-03-09T17:28:52.588 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2>&1 | tee ceph_test_rados_open_pools_parallel.log | sed "s/^/ open_pools_parallel: /"' 2026-03-09T17:28:52.592 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.592 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.592 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.592 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.598 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2026-03-09T17:28:52.599 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_open_pools_parallel.log 2026-03-09T17:28:52.600 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ open_pools_parallel: /' 2026-03-09T17:28:52.603 INFO:tasks.workunit.client.0.vm00.stderr:++ echo delete_pools_parallel 2026-03-09T17:28:52.603 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stdout:test delete_pools_parallel on pid 60582 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=delete_pools_parallel 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60582 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test delete_pools_parallel on pid 60582' 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60582 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.614 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.628 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s cls 2026-03-09T17:28:52.628 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' cls' 2026-03-09T17:28:52.628 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2>&1 | tee ceph_test_rados_delete_pools_parallel.log | sed "s/^/ delete_pools_parallel: /"' 2026-03-09T17:28:52.630 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.630 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.630 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.630 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.630 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ delete_pools_parallel: /' 2026-03-09T17:28:52.632 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_delete_pools_parallel.log 2026-03-09T17:28:52.633 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.634 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2026-03-09T17:28:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:52 vm02 bash[23351]: cluster 2026-03-09T17:28:50.672008+0000 mgr.y (mgr.14505) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:52 vm02 bash[23351]: cluster 2026-03-09T17:28:50.672008+0000 mgr.y (mgr.14505) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:52.636 INFO:tasks.workunit.client.0.vm00.stderr:++ echo cls 2026-03-09T17:28:52.644 INFO:tasks.workunit.client.0.vm00.stdout:test cls on pid 60618 2026-03-09T17:28:52.644 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=cls 2026-03-09T17:28:52.645 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60618 2026-03-09T17:28:52.645 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test cls on pid 60618' 2026-03-09T17:28:52.645 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60618 2026-03-09T17:28:52.645 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.645 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.646 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cls 2>&1 | tee ceph_test_neorados_cls.log | sed "s/^/ cls: /"' 2026-03-09T17:28:52.647 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s cmd 2026-03-09T17:28:52.647 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' cmd' 2026-03-09T17:28:52.648 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.648 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.648 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.648 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.649 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ cls: /' 2026-03-09T17:28:52.650 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_cls.log 2026-03-09T17:28:52.650 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_cls 2026-03-09T17:28:52.653 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.655 INFO:tasks.workunit.client.0.vm00.stderr:++ echo cmd 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=cmd 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stdout:test cmd on pid 60641 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60641 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test cmd on pid 60641' 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60641 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.660 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.666 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cmd 2>&1 | tee ceph_test_neorados_cmd.log | sed "s/^/ cmd: /"' 2026-03-09T17:28:52.668 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s handler_error 2026-03-09T17:28:52.669 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' handler_error' 2026-03-09T17:28:52.670 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.670 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.670 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.670 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.680 
INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_cmd 2026-03-09T17:28:52.680 INFO:tasks.workunit.client.0.vm00.stderr:++ echo handler_error 2026-03-09T17:28:52.680 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.681 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_cmd.log 2026-03-09T17:28:52.683 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ cmd: /' 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stdout:test handler_error on pid 60703 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=handler_error 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60703 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test handler_error on pid 60703' 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60703 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.693 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.699 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s io 2026-03-09T17:28:52.699 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_handler_error 2>&1 | tee ceph_test_neorados_handler_error.log | sed "s/^/ handler_error: /"' 2026-03-09T17:28:52.699 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' io' 2026-03-09T17:28:52.704 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.704 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.704 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.704 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.708 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.710 INFO:tasks.workunit.client.0.vm00.stdout:test io on pid 60727 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:++ echo io 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=io 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60727 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test io on pid 60727' 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60727 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s ec_io 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' ec_io' 2026-03-09T17:28:52.711 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_io 2>&1 | tee ceph_test_neorados_io.log | sed "s/^/ io: /"' 2026-03-09T17:28:52.712 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ handler_error: /' 2026-03-09T17:28:52.714 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.714 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.714 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.714 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.715 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ io: /' 2026-03-09T17:28:52.715 INFO:tasks.workunit.client.0.vm00.stderr:+ tee 
ceph_test_neorados_handler_error.log 2026-03-09T17:28:52.716 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_handler_error 2026-03-09T17:28:52.716 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_io.log 2026-03-09T17:28:52.716 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.719 INFO:tasks.workunit.client.0.vm00.stderr:++ echo ec_io 2026-03-09T17:28:52.721 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_io 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stdout:test ec_io on pid 60742 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=ec_io 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60742 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test ec_io on pid 60742' 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60742 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s list 2026-03-09T17:28:52.724 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' list' 2026-03-09T17:28:52.727 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_io 2>&1 | tee ceph_test_neorados_ec_io.log | sed "s/^/ ec_io: /"' 2026-03-09T17:28:52.727 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.727 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.727 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.727 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.730 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.733 INFO:tasks.workunit.client.0.vm00.stderr:++ echo list 2026-03-09T17:28:52.734 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_ec_io 2026-03-09T17:28:52.739 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ ec_io: /' 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stdout:test list on pid 60759 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=list 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_ec_io.log 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60759 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test list on pid 60759' 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60759 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.740 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_list 2>&1 | tee ceph_test_neorados_list.log | sed "s/^/ list: /"' 2026-03-09T17:28:52.741 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s ec_list 2026-03-09T17:28:52.741 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' ec_list' 2026-03-09T17:28:52.743 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.743 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.744 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.744 
INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.751 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ list: /' 2026-03-09T17:28:52.751 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.755 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_list.log 2026-03-09T17:28:52.755 INFO:tasks.workunit.client.0.vm00.stderr:++ echo ec_list 2026-03-09T17:28:52.757 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_list 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stdout:test ec_list on pid 60798 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=ec_list 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60798 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test ec_list on pid 60798' 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60798 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.761 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.764 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s misc 2026-03-09T17:28:52.764 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_list 2>&1 | tee ceph_test_neorados_ec_list.log | sed "s/^/ ec_list: /"' 2026-03-09T17:28:52.764 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' misc' 2026-03-09T17:28:52.765 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.765 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.765 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.765 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.770 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ ec_list: /' 2026-03-09T17:28:52.771 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_ec_list 2026-03-09T17:28:52.772 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_ec_list.log 2026-03-09T17:28:52.778 INFO:tasks.workunit.client.0.vm00.stderr:++ echo misc 2026-03-09T17:28:52.780 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stdout:test misc on pid 60830 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=misc 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60830 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test misc on pid 60830' 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60830 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.782 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_misc 2>&1 | tee ceph_test_neorados_misc.log | sed "s/^/ misc: /"' 2026-03-09T17:28:52.785 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.785 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.785 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.785 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:52 vm00 
bash[20770]: cluster 2026-03-09T17:28:50.672008+0000 mgr.y (mgr.14505) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:52.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:52 vm00 bash[20770]: cluster 2026-03-09T17:28:50.672008+0000 mgr.y (mgr.14505) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:52 vm00 bash[28333]: cluster 2026-03-09T17:28:50.672008+0000 mgr.y (mgr.14505) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:52 vm00 bash[28333]: cluster 2026-03-09T17:28:50.672008+0000 mgr.y (mgr.14505) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:52.787 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s pool 2026-03-09T17:28:52.789 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' pool' 2026-03-09T17:28:52.793 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ misc: /' 2026-03-09T17:28:52.797 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_misc.log 2026-03-09T17:28:52.797 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_misc 2026-03-09T17:28:52.802 INFO:tasks.workunit.client.0.vm00.stderr:++ echo pool 2026-03-09T17:28:52.802 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stdout:test pool on pid 60851 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=pool 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60851 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test pool on pid 60851' 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60851 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.804 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.807 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s read_operations 2026-03-09T17:28:52.807 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' read_operations' 2026-03-09T17:28:52.807 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_pool 2>&1 | tee ceph_test_neorados_pool.log | sed "s/^/ pool: /"' 2026-03-09T17:28:52.808 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.808 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.808 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.808 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.811 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.814 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_pool 2026-03-09T17:28:52.817 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_pool.log 2026-03-09T17:28:52.818 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ pool: /' 2026-03-09T17:28:52.818 INFO:tasks.workunit.client.0.vm00.stderr:++ echo read_operations 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stdout:test read_operations on pid 
60883 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=read_operations 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60883 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test read_operations on pid 60883' 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60883 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.820 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.825 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_read_operations 2>&1 | tee ceph_test_neorados_read_operations.log | sed "s/^/ read_operations: /"' 2026-03-09T17:28:52.826 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.826 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.826 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.826 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.827 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s snapshots 2026-03-09T17:28:52.827 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' snapshots' 2026-03-09T17:28:52.831 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ read_operations: /' 2026-03-09T17:28:52.832 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_read_operations.log 2026-03-09T17:28:52.832 INFO:tasks.workunit.client.0.vm00.stderr:++ echo snapshots 2026-03-09T17:28:52.832 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_read_operations 2026-03-09T17:28:52.833 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stdout:test snapshots on pid 60907 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=snapshots 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60907 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test snapshots on pid 60907' 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60907 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.840 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.843 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s watch_notify 2026-03-09T17:28:52.844 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_snapshots 2>&1 | tee ceph_test_neorados_snapshots.log | sed "s/^/ snapshots: /"' 2026-03-09T17:28:52.844 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' watch_notify' 2026-03-09T17:28:52.845 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.846 INFO:tasks.workunit.client.0.vm00.stderr:++ echo watch_notify 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stdout:test watch_notify on pid 60921 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=watch_notify 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60921 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test watch_notify on pid 60921' 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60921 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in 
cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s write_operations 2026-03-09T17:28:52.847 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' write_operations' 2026-03-09T17:28:52.848 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_watch_notify 2>&1 | tee ceph_test_neorados_watch_notify.log | sed "s/^/ watch_notify: /"' 2026-03-09T17:28:52.849 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.849 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.849 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.849 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.850 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.850 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.850 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.850 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.850 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_watch_notify 2026-03-09T17:28:52.851 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ snapshots: /' 2026-03-09T17:28:52.852 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_snapshots.log 2026-03-09T17:28:52.852 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_snapshots 2026-03-09T17:28:52.855 INFO:tasks.workunit.client.0.vm00.stderr:++ echo write_operations 2026-03-09T17:28:52.856 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-09T17:28:52.862 INFO:tasks.workunit.client.0.vm00.stdout:test write_operations on pid 60938 2026-03-09T17:28:52.862 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_watch_notify.log 2026-03-09T17:28:52.862 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ watch_notify: /' 2026-03-09T17:28:52.862 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=write_operations 2026-03-09T17:28:52.862 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60938 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test write_operations on pid 60938' 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60938 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ ret=0 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60112 2026-03-09T17:28:52.863 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60112 2026-03-09T17:28:52.865 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_write_operations 2>&1 | tee ceph_test_neorados_write_operations.log | sed "s/^/ write_operations: /"' 2026-03-09T17:28:52.866 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-09T17:28:52.866 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.866 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-09T17:28:52.866 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-09T17:28:52.874 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ write_operations: /' 2026-03-09T17:28:52.874 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_write_operations.log 
2026-03-09T17:28:52.876 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_write_operations 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [==========] Running 12 tests from 1 test suite. 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] Global test environment set-up. 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] 12 tests from AsioRados 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadCallback 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadCallback (1 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadFuture 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadFuture (0 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadYield 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadYield (0 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteCallback 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteCallback (7 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteFuture 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteFuture (1 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteYield 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteYield (3 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationCallback 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationCallback (0 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationFuture 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationFuture (1 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationYield 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationYield (0 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationCallback 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationCallback (2 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationFuture 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationFuture (1 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationYield 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationYield (3 ms) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] 12 tests from AsioRados (19 ms total) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: 2026-03-09T17:28:53.761 
INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] Global test environment tear-down 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [==========] 12 tests from 1 test suite ran. (1577 ms total) 2026-03-09T17:28:53.761 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ PASSED ] 12 tests. 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:51.520777+0000 mgr.y (mgr.14505) 105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:51.520777+0000 mgr.y (mgr.14505) 105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: cluster 2026-03-09T17:28:52.362323+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: cluster 2026-03-09T17:28:52.362323+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.362746+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/344917846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.362746+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/344917846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.362998+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/964858029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.362998+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/964858029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.363154+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/3978540523' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.363154+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.100:0/3978540523' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.372631+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.100:0/3161168077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.372631+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.100:0/3161168077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.376612+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.376612+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.376738+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.100:0/2658369658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.376738+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.100:0/2658369658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419109+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419109+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419253+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419253+0000 mon.a (mon.0) 823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419354+0000 mon.a (mon.0) 824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419354+0000 mon.a (mon.0) 824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419413+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419413+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419494+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.419494+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.430728+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.430728+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.430861+0000 mon.a (mon.0) 828 : audit [INF] from='client.? 
192.168.123.100:0/3966379215' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.430861+0000 mon.a (mon.0) 828 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.504980+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.504980+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.505711+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.505711+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.507535+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.507535+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.507869+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.507869+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.508210+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 
192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.508210+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.508810+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.508810+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.545290+0000 mon.c (mon.2) 81 : audit [DBG] from='client.? 192.168.123.100:0/559969112' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.545290+0000 mon.c (mon.2) 81 : audit [DBG] from='client.? 192.168.123.100:0/559969112' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.798520+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.798520+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.831859+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:53 vm00 bash[28333]: audit 2026-03-09T17:28:52.831859+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:51.520777+0000 mgr.y (mgr.14505) 105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:51.520777+0000 mgr.y (mgr.14505) 105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: cluster 2026-03-09T17:28:52.362323+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: cluster 2026-03-09T17:28:52.362323+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.362746+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/344917846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.362746+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/344917846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.362998+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/964858029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.362998+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/964858029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.363154+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/3978540523' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.363154+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.100:0/3978540523' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.372631+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.100:0/3161168077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.372631+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.100:0/3161168077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.376612+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.376612+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.376738+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.100:0/2658369658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.376738+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.100:0/2658369658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419109+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419109+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419253+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419253+0000 mon.a (mon.0) 823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419354+0000 mon.a (mon.0) 824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419354+0000 mon.a (mon.0) 824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419413+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419413+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419494+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.419494+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.430728+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.430728+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.430861+0000 mon.a (mon.0) 828 : audit [INF] from='client.? 
192.168.123.100:0/3966379215' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.430861+0000 mon.a (mon.0) 828 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.504980+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.504980+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.505711+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.505711+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.507535+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.507535+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.507869+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.507869+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.508210+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 
192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.508210+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.508810+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.508810+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.545290+0000 mon.c (mon.2) 81 : audit [DBG] from='client.? 192.168.123.100:0/559969112' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.545290+0000 mon.c (mon.2) 81 : audit [DBG] from='client.? 192.168.123.100:0/559969112' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.798520+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.798520+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.831859+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:53 vm00 bash[20770]: audit 2026-03-09T17:28:52.831859+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:51.520777+0000 mgr.y (mgr.14505) 105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:51.520777+0000 mgr.y (mgr.14505) 105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: cluster 2026-03-09T17:28:52.362323+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: cluster 2026-03-09T17:28:52.362323+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.362746+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/344917846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.362746+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/344917846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.362998+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/964858029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.362998+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/964858029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.363154+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/3978540523' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.363154+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.100:0/3978540523' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.372631+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.100:0/3161168077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.372631+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.100:0/3161168077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.376612+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.376612+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.376738+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.100:0/2658369658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.376738+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.100:0/2658369658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419109+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419109+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419253+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419253+0000 mon.a (mon.0) 823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419354+0000 mon.a (mon.0) 824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419354+0000 mon.a (mon.0) 824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419413+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419413+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419494+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.419494+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.430728+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.430728+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.430861+0000 mon.a (mon.0) 828 : audit [INF] from='client.? 
192.168.123.100:0/3966379215' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.430861+0000 mon.a (mon.0) 828 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.504980+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.504980+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.505711+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.505711+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.507535+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.507535+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.507869+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.507869+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.508210+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 
192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.508210+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.508810+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.508810+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.545290+0000 mon.c (mon.2) 81 : audit [DBG] from='client.? 192.168.123.100:0/559969112' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.545290+0000 mon.c (mon.2) 81 : audit [DBG] from='client.? 192.168.123.100:0/559969112' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.798520+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.798520+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.831859+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:53 vm02 bash[23351]: audit 2026-03-09T17:28:52.831859+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [==========] Running 11 tests from 3 test suites. 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] Global test environment set-up. 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 7 tests from LibRadosList 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.ListObjects 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.ListObjects (293 ms) 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.ListObjectsZeroInName 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.ListObjectsZeroInName (26 ms) 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.ListObjectsNS 2026-03-09T17:28:54.015 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo2,foo3 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo2 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo3 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo4,foo5 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo4 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo5 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo6,foo7 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo7 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo6 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo4 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo5 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo7 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo6 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo1 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo1 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo2 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo3 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.ListObjectsNS (78 ms) 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.ListObjectsStart 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 1 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 10 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 13 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 7 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 14 0 
2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 0 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 15 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 11 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 5 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 8 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 6 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 3 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 4 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 12 0 2026-03-09T17:28:54.016 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 9 0 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2 0 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.ListObjectsStart (58 ms) 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.ListObjectsCursor 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: x cursor=MIN 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=1 cursor=13:02547ec2:::1:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=10 cursor=13:52ea6a34:::10:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=13 cursor=13:566253c9:::13:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=7 cursor=13:5c6b0b28:::7:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=14 cursor=13:62a1935d:::14:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=0 cursor=13:6cac518f:::0:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=15 cursor=13:863748b0:::15:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=11 cursor=13:89d3ae78:::11:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=5 cursor=13:b29083e3:::5:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=8 cursor=13:bd63b0f1:::8:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=6 cursor=13:c4fdafeb:::6:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=3 cursor=13:cfc208b3:::3:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=4 cursor=13:d83876eb:::4:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=12 cursor=13:de5d7c5f:::12:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=9 cursor=13:e960b815:::9:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=2 cursor=13:f905c69b:::2:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: FIRST> seek to MIN oid=1 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=1 cursor=13:02547ec2:::1:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:02547ec2:::1:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 
13:02547ec2:::1:head -> 1 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=10 cursor=13:52ea6a34:::10:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:52ea6a34:::10:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:52ea6a34:::10:head -> 10 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=13 cursor=13:566253c9:::13:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:566253c9:::13:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:566253c9:::13:head -> 13 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=7 cursor=13:5c6b0b28:::7:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:5c6b0b28:::7:head 2026-03-09T17:28:54.017 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:5c6b0b28:::7:head -> 7 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=14 cursor=13:62a1935d:::14:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:62a1935d:::14:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:62a1935d:::14:head -> 14 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=0 cursor=13:6cac518f:::0:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:6cac518f:::0:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:6cac518f:::0:head -> 0 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=15 cursor=13:863748b0:::15:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:863748b0:::15:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:863748b0:::15:head -> 15 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=11 cursor=13:89d3ae78:::11:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:89d3ae78:::11:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:89d3ae78:::11:head -> 11 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=5 cursor=13:b29083e3:::5:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:b29083e3:::5:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:b29083e3:::5:head -> 5 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=8 cursor=13:bd63b0f1:::8:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:bd63b0f1:::8:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:bd63b0f1:::8:head -> 8 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=6 cursor=13:c4fdafeb:::6:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:c4fdafeb:::6:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:c4fdafeb:::6:head -> 6 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=3 cursor=13:cfc208b3:::3:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:cfc208b3:::3:head 2026-03-09T17:28:54.064 
INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:cfc208b3:::3:head -> 3 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=4 cursor=13:d83876eb:::4:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:d83876eb:::4:head 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:d83876eb:::4:head -> 4 2026-03-09T17:28:54.064 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=12 cursor=13:de5d7c5f:::12:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:de5d7c5f:::12:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:de5d7c5f:::12:head -> 12 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=9 cursor=13:e960b815:::9:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:e960b815:::9:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:e960b815:::9:head -> 9 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=2 cursor=13:f905c69b:::2:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:f905c69b:::2:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:f905c69b:::2:head -> 2 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:52ea6a34:::10:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:52ea6a34:::10:head expected=13:52ea6a34:::10:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:52ea6a34:::10:head -> 10 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=10 expected=10 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:566253c9:::13:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:566253c9:::13:head expected=13:566253c9:::13:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:566253c9:::13:head -> 13 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=13 expected=13 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:02547ec2:::1:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:02547ec2:::1:head expected=13:02547ec2:::1:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:02547ec2:::1:head -> 1 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=1 expected=1 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:cfc208b3:::3:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:cfc208b3:::3:head expected=13:cfc208b3:::3:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:cfc208b3:::3:head -> 3 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=3 expected=3 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:c4fdafeb:::6:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:c4fdafeb:::6:head expected=13:c4fdafeb:::6:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:c4fdafeb:::6:head -> 6 2026-03-09T17:28:54.065 
INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=6 expected=6 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:6cac518f:::0:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:6cac518f:::0:head expected=13:6cac518f:::0:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:6cac518f:::0:head -> 0 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=0 expected=0 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:bd63b0f1:::8:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:bd63b0f1:::8:head expected=13:bd63b0f1:::8:head 2026-03-09T17:28:54.065 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:bd63b0f1:::8:head -> 8 2026-03-09T17:28:54.066 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=8 expected=8 2026-03-09T17:28:54.066 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:b29083e3:::5:head 2026-03-09T17:28:54.066 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:b29083e3:::5:head expected=13:b29083e3:::5:head 2026-03-09T17:28:54.066 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:b29083e3:::5:head -> 5 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: cluster 2026-03-09T17:28:52.672430+0000 mgr.y (mgr.14505) 106 : cluster [DBG] pgmap v62: 388 pgs: 256 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: cluster 2026-03-09T17:28:52.672430+0000 mgr.y (mgr.14505) 106 : cluster [DBG] pgmap v62: 388 pgs: 256 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: cluster 2026-03-09T17:28:53.312166+0000 mon.a (mon.0) 834 : cluster [WRN] Health check failed: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: cluster 2026-03-09T17:28:53.312166+0000 mon.a (mon.0) 834 : cluster [WRN] Health check failed: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421226+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421226+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421385+0000 mon.a (mon.0) 836 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421385+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421541+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421541+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421598+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421598+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421660+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421660+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421718+0000 mon.a (mon.0) 840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421718+0000 mon.a (mon.0) 840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421751+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 
192.168.123.100:0/3966379215' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421751+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421936+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.421936+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.422090+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.422090+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.422351+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.422351+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: cluster 2026-03-09T17:28:53.462495+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: cluster 2026-03-09T17:28:53.462495+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.463363+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 
192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.463363+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.464891+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.464891+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.465012+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.465012+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.466211+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.466211+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.498865+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 
192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.498865+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.500807+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.500807+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.500978+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1250414856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.500978+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1250414856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.501427+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.501427+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.501577+0000 mon.a (mon.0) 851 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.501577+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.501684+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.501684+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502001+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/1856959627' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502001+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/1856959627' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502127+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/917798435' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502127+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/917798435' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502273+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502273+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502342+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.502342+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.534280+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/2813333160' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.534280+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/2813333160' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.534449+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/705668911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.534449+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/705668911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537196+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/4292993947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537196+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/4292993947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537286+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.100:0/1939334061' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537286+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/1939334061' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537345+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537345+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537360+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/2085754744' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537360+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/2085754744' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537431+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2520318720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537431+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2520318720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537451+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.537451+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539076+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539076+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539209+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539209+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539328+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539328+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539445+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:53.539445+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:54.128105+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:54.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:54 vm00 bash[28333]: audit 2026-03-09T17:28:54.128105+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: cluster 2026-03-09T17:28:52.672430+0000 mgr.y (mgr.14505) 106 : cluster [DBG] pgmap v62: 388 pgs: 256 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: cluster 2026-03-09T17:28:52.672430+0000 mgr.y (mgr.14505) 106 : cluster [DBG] pgmap v62: 388 pgs: 256 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: cluster 2026-03-09T17:28:53.312166+0000 mon.a (mon.0) 834 : cluster [WRN] Health check failed: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: cluster 2026-03-09T17:28:53.312166+0000 mon.a (mon.0) 834 : cluster [WRN] Health check failed: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421226+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421226+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421385+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421385+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421541+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421541+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421598+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421598+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421660+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421660+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421718+0000 mon.a (mon.0) 840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421718+0000 mon.a (mon.0) 840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421751+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421751+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421936+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.421936+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.422090+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.422090+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.422351+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.422351+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: cluster 2026-03-09T17:28:53.462495+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: cluster 2026-03-09T17:28:53.462495+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.463363+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.463363+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.464891+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.464891+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.465012+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.465012+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.466211+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.466211+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.498865+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.498865+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.500807+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.500807+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.500978+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1250414856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.500978+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1250414856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.501427+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.501427+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.501577+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.501577+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.501684+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.501684+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502001+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/1856959627' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502001+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/1856959627' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502127+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/917798435' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502127+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/917798435' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502273+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502273+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502342+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.502342+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.534280+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 
192.168.123.100:0/2813333160' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.534280+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/2813333160' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.534449+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/705668911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.534449+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/705668911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537196+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/4292993947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537196+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/4292993947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537286+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/1939334061' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537286+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/1939334061' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537345+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537345+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537360+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/2085754744' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537360+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/2085754744' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537431+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2520318720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537431+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2520318720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537451+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.537451+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539076+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539076+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539209+0000 mon.a (mon.0) 858 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539209+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539328+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539328+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539445+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:53.539445+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:54.128105+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:54.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:54 vm00 bash[20770]: audit 2026-03-09T17:28:54.128105+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: cluster 2026-03-09T17:28:52.672430+0000 mgr.y (mgr.14505) 106 : cluster [DBG] pgmap v62: 388 pgs: 256 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: cluster 2026-03-09T17:28:52.672430+0000 mgr.y (mgr.14505) 106 : cluster [DBG] pgmap v62: 388 pgs: 256 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: cluster 2026-03-09T17:28:53.312166+0000 mon.a (mon.0) 834 : cluster [WRN] Health check failed: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: cluster 2026-03-09T17:28:53.312166+0000 mon.a (mon.0) 834 : cluster [WRN] Health check failed: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421226+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421226+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59916-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421385+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421385+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421541+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421541+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-60068-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421598+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421598+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59921-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421660+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421660+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-60009-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421718+0000 mon.a (mon.0) 840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421718+0000 mon.a (mon.0) 840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59929-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421751+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421751+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/3966379215' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-60039-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421936+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.421936+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.422090+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.422090+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60745-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.422351+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.422351+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60801-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: cluster 2026-03-09T17:28:53.462495+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: cluster 2026-03-09T17:28:53.462495+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.463363+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.463363+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.464891+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.464891+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.465012+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.465012+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.466211+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.466211+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.498865+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.498865+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.500807+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.500807+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.500978+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1250414856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.500978+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1250414856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.501427+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.501427+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.501577+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.501577+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.501684+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.501684+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502001+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/1856959627' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502001+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/1856959627' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502127+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/917798435' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502127+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/917798435' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502273+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502273+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502342+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.502342+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.534280+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 
192.168.123.100:0/2813333160' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.534280+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/2813333160' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.534449+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/705668911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.534449+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/705668911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537196+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/4292993947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537196+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/4292993947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537286+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/1939334061' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537286+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/1939334061' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537345+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537345+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537360+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/2085754744' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537360+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/2085754744' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537431+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2520318720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537431+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2520318720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537451+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.537451+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539076+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539076+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539209+0000 mon.a (mon.0) 858 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539209+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539328+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539328+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539445+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:53.539445+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:54.128105+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:54.888 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:54 vm02 bash[23351]: audit 2026-03-09T17:28:54.128105+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=5 cls: Running main() from gmock_main.cc 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [==========] Running 1 test from 1 test suite. 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] Global test environment set-up. 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] 1 test from NeoRadosCls 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [ RUN ] NeoRadosCls.DNE 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [ OK ] NeoRadosCls.DNE (2982 ms) 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] 1 test from NeoRadosCls (2982 ms total) 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] Global test environment tear-down 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [==========] 1 test from 1 test suite ran. 
(2982 ms total) 2026-03-09T17:28:55.681 INFO:tasks.workunit.client.0.vm00.stdout: cls: [ PASSED ] 1 test. 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: Running main() from gmock_main.cc 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [==========] Running 1 test from 1 test suite. 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] Global test environment set-up. 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] 1 test from neocls_handler_error 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [ RUN ] neocls_handler_error.test_handler_error 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [ OK ] neocls_handler_error.test_handler_error (2950 ms) 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] 1 test from neocls_handler_error (2950 ms total) 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] Global test environment tear-down 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [==========] 1 test from 1 test suite ran. (2950 ms total) 2026-03-09T17:28:55.705 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [ PASSED ] 1 test. 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: Running main() from gmock_main.cc 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [==========] Running 3 tests from 1 test suite. 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] Global test environment set-up. 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.MonDescribePP 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ OK ] LibRadosCmd.MonDescribePP (39 ms) 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.OSDCmdPP 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ OK ] LibRadosCmd.OSDCmdPP (37 ms) 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.PGCmdPP 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ OK ] LibRadosCmd.PGCmdPP (3136 ms) 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd (3212 ms total) 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] Global test environment tear-down 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [==========] 3 tests from 1 test suite ran. (3212 ms total) 2026-03-09T17:28:55.749 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ PASSED ] 3 tests. 2026-03-09T17:28:55.812 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: Running main() from gmock_main.cc 2026-03-09T17:28:55.812 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [==========] Running 4 tests from 1 test suite. 2026-03-09T17:28:55.812 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] Global test environment set-up. 
2026-03-09T17:28:55.812 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] 4 tests from LibRadosCmd 2026-03-09T17:28:55.812 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.MonDescribe 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.MonDescribe (52 ms) 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.OSDCmd 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.OSDCmd (34 ms) 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.PGCmd 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.PGCmd (3088 ms) 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.WatchLog 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438045+0000 mon.a [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438108+0000 mon.a [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438495+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438525+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438546+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438570+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438596+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438656+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438719+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438776+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438814+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.813 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.438878+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438045+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438045+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438108+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438108+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438495+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438495+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438525+0000 mon.a (mon.0) 864 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438525+0000 mon.a (mon.0) 864 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438546+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438546+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438570+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438570+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438596+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438596+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438656+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438656+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438719+0000 mon.a (mon.0) 869 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438719+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438776+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438776+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438814+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438814+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438878+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.438878+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: cluster 2026-03-09T17:28:54.508912+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: cluster 2026-03-09T17:28:54.508912+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.596569+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.596569+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 
192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.629176+0000 mon.a (mon.0) 874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.629176+0000 mon.a (mon.0) 874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.630667+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.630667+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.659217+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.659217+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.659428+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.659428+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.662566+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.662566+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 
192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.663017+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.663017+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.663339+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.663339+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: cluster 2026-03-09T17:28:54.673002+0000 mgr.y (mgr.14505) 107 : cluster [DBG] pgmap v65: 996 pgs: 736 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: cluster 2026-03-09T17:28:54.673002+0000 mgr.y (mgr.14505) 107 : cluster [DBG] pgmap v65: 996 pgs: 736 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.685988+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:54.685988+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:55.129613+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:55 vm02 bash[23351]: audit 2026-03-09T17:28:55.129613+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438045+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438045+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438108+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438108+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438495+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438495+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438525+0000 mon.a (mon.0) 864 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438525+0000 mon.a (mon.0) 864 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438546+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438546+0000 mon.a (mon.0) 865 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438570+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438570+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438596+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438596+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438656+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438656+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438719+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438719+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438776+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438776+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438814+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438814+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438878+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.438878+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: cluster 2026-03-09T17:28:54.508912+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: cluster 2026-03-09T17:28:54.508912+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.596569+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.596569+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.629176+0000 mon.a (mon.0) 874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.629176+0000 mon.a (mon.0) 874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.630667+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 
192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.630667+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.659217+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.659217+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.659428+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.659428+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.662566+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.662566+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.663017+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.663017+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.663339+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.663339+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: cluster 2026-03-09T17:28:54.673002+0000 mgr.y (mgr.14505) 107 : cluster [DBG] pgmap v65: 996 pgs: 736 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: cluster 2026-03-09T17:28:54.673002+0000 mgr.y (mgr.14505) 107 : cluster [DBG] pgmap v65: 996 pgs: 736 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.685988+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:54.685988+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:55.129613+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:55 vm00 bash[28333]: audit 2026-03-09T17:28:55.129613+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438045+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438045+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 192.168.123.100:0/2294175570' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60359-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438108+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 
192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438108+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.100:0/2755652907' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60441-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438495+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438495+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438525+0000 mon.a (mon.0) 864 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438525+0000 mon.a (mon.0) 864 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-60118-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438546+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438546+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60286-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438570+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438570+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438596+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438596+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-60171-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438656+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438656+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438719+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438719+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-60199-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438776+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438776+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-60256-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438814+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438814+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-60225-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438878+0000 mon.a (mon.0) 872 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.438878+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: cluster 2026-03-09T17:28:54.508912+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: cluster 2026-03-09T17:28:54.508912+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.596569+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.596569+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.629176+0000 mon.a (mon.0) 874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.629176+0000 mon.a (mon.0) 874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.630667+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.630667+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.659217+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.659217+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.659428+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.659428+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.662566+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.662566+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.663017+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.663017+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.663339+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.663339+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: cluster 2026-03-09T17:28:54.673002+0000 mgr.y (mgr.14505) 107 : cluster [DBG] pgmap v65: 996 pgs: 736 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: cluster 2026-03-09T17:28:54.673002+0000 mgr.y (mgr.14505) 107 : cluster [DBG] pgmap v65: 996 pgs: 736 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.685988+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:54.685988+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:55.129613+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:56.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:55 vm00 bash[20770]: audit 2026-03-09T17:28:55.129613+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:56.790 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:28:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:28:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.596569+0000 mon.c [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP api_c_read_operations: Running main() from gmock_main.cc 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [==========] Running 17 tests from 1 test suite. 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] Global test environment set-up. 
2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.NewDelete 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.NewDelete (0 ms) 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.SetOpFlags 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.SetOpFlags (553 ms) 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertExists 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertExists (65 ms) 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertVersion 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertVersion (7 ms) 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpXattr 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpXattr (44 ms) 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Read 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Read (17 ms) 2026-03-09T17:28:56.822 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Checksum 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Checksum (8 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.RWOrderedRead 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.RWOrderedRead (6 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ShortRead 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ShortRead (6 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Exec 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Exec (10 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ExecUserBuf 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ExecUserBuf (5 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat (7 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat2 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat2 (4 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Omap 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Omap (20 
ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.OmapNuls 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.OmapNuls (626 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.GetXattrs 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.GetXattrs (115 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpExt 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpExt (10 ms) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest (1503 ms total) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] Global test environment tear-down 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [==========] 17 tests from 1 test suite ran. (4256 ms total) 2026-03-09T17:28:56.823 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ PASSED ] 17 tests. 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout:_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.629176+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.630667+0000 mon.c [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.659217+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.659428+0000 mon.a [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.662566+0000 mon.c [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.663017+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.663339+0000 mon.a [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:54.685988+0000 mon.a [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.594057+0000 mon.a [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:56.844 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.594559+0000 mon.a [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.594585+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.594614+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.594636+0000 mon.a [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.622363+0000 mon.c [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.663881+0000 mon.a [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.672409+0000 mon.c [INF] from='client.? 
192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.672537+0000 mon.c [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.702180+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.702550+0000 mon.a [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.702812+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.702907+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.808516+0000 mon.b [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.815063+0000 mon.c [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.816149+0000 mon.c [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.816336+0000 client.admin [INF] onexx 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.833951+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.834194+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.834287+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.843377+0000 mon.c [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.843533+0000 mon.c [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:56.845 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.854193+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594057+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594057+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594559+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594559+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594585+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594585+0000 mon.a (mon.0) 882 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594614+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594614+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594636+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.594636+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.622363+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.622363+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: cluster 2026-03-09T17:28:55.658292+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: cluster 2026-03-09T17:28:55.658292+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.663881+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.663881+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.672409+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.672409+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.672537+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.672537+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702180+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702180+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702550+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702550+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 
192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702812+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702812+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702907+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.702907+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.808516+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.808516+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.815063+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.815063+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.816149+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.816149+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 
192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: cluster 2026-03-09T17:28:55.816336+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: cluster 2026-03-09T17:28:55.816336+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.833951+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.833951+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.834194+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.834194+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.834287+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.834287+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.843377+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.843377+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.843533+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.843533+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 
192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.854193+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.854193+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.854856+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.854856+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.862104+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.862104+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.862287+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.862287+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.886096+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.886096+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.897508+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:55.897508+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:56.130363+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:57.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:56 vm02 bash[23351]: audit 2026-03-09T17:28:56.130363+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594057+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594057+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594559+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594559+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594585+0000 mon.a (mon.0) 882 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594585+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594614+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594614+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594636+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.594636+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.622363+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.622363+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 
192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: cluster 2026-03-09T17:28:55.658292+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: cluster 2026-03-09T17:28:55.658292+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.663881+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.663881+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.672409+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.672409+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.672537+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.672537+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702180+0000 mon.a (mon.0) 887 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702180+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702550+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702550+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702812+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702812+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702907+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.702907+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.808516+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.808516+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.815063+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 
192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.815063+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.816149+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.816149+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: cluster 2026-03-09T17:28:55.816336+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: cluster 2026-03-09T17:28:55.816336+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.833951+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.833951+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.834194+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.834194+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.834287+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.834287+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.843377+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 
192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.843377+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.843533+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.843533+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.854193+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.854193+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.854856+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.854856+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.862104+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.862104+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.862287+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 
192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.862287+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.886096+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.886096+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.897508+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:55.897508+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:56.130363+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:56 vm00 bash[28333]: audit 2026-03-09T17:28:56.130363+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594057+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594057+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 
192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60745-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594559+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594559+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60801-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594585+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594585+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60328-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594614+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594614+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594636+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.594636+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59921-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.622363+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.622363+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: cluster 2026-03-09T17:28:55.658292+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: cluster 2026-03-09T17:28:55.658292+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.663881+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.663881+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.672409+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.672409+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/4049151251' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.672537+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 
192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.672537+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702180+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702180+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702550+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702550+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702812+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702812+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702907+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.702907+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.808516+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.808516+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.815063+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.815063+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.816149+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.816149+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: cluster 2026-03-09T17:28:55.816336+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: cluster 2026-03-09T17:28:55.816336+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.833951+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.833951+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.834194+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.834194+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.834287+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.834287+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.843377+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.843377+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.843533+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.843533+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.854193+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.854193+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.854856+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.854856+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.862104+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 
192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.862104+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.862287+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.862287+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.886096+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.886096+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.897508+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:55.897508+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:56.130363+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:57.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:56 vm00 bash[20770]: audit 2026-03-09T17:28:56.130363+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:56.673575+0000 mgr.y (mgr.14505) 108 : cluster [DBG] pgmap v67: 828 pgs: 568 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:56.673575+0000 mgr.y (mgr.14505) 108 : cluster [DBG] pgmap v67: 828 pgs: 568 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710229+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710229+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710273+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710273+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710296+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710296+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710323+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710323+0000 mon.a (mon.0) 901 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710366+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.710366+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.723400+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.723400+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.723439+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.723439+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:56.765314+0000 mon.a (mon.0) 903 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:56.765314+0000 mon.a (mon.0) 903 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.771535+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 
192.168.123.100:0/3545648387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.771535+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/3545648387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.785946+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.785946+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.787082+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.787082+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.804415+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.804415+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.835122+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.835122+0000 mon.a (mon.0) 907 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.845931+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.845931+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.892034+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.892034+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:56.892153+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:56.892153+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.905806+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.905806+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.906147+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.906147+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.943672+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.943672+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.944229+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.944229+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.964912+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.964912+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.965006+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.965006+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.968625+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.968625+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.978512+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.978512+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.981801+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.981801+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.986736+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.986736+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.991322+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:56.991322+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.003095+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.003095+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.003258+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.003258+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.010622+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 
192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.010622+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.025904+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.025904+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.029140+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.029140+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.131376+0000 mon.c (mon.2) 106 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.131376+0000 mon.c (mon.2) 106 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749339+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749339+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749376+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749376+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749403+0000 mon.a (mon.0) 920 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749403+0000 mon.a (mon.0) 920 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749422+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749422+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749440+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749440+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749464+0000 mon.a (mon.0) 923 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749464+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749486+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.749486+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.774622+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.774622+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.774748+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.774748+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 
192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:57.809444+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: cluster 2026-03-09T17:28:57.809444+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.810501+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.810501+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.810823+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.810823+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.811232+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.811232+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.816844+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.816844+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 
192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.831526+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:57 vm00 bash[20770]: audit 2026-03-09T17:28:57.831526+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:56.673575+0000 mgr.y (mgr.14505) 108 : cluster [DBG] pgmap v67: 828 pgs: 568 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:56.673575+0000 mgr.y (mgr.14505) 108 : cluster [DBG] pgmap v67: 828 pgs: 568 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710229+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710229+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710273+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710273+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710296+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710296+0000 mon.a (mon.0) 900 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710323+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710323+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710366+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.710366+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.723400+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.723400+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.723439+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.723439+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 
192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:56.765314+0000 mon.a (mon.0) 903 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:56.765314+0000 mon.a (mon.0) 903 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.771535+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/3545648387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.771535+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/3545648387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.785946+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.785946+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.787082+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.787082+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.804415+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.804415+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.835122+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.835122+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.845931+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.845931+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.892034+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.892034+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:56.892153+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:56.892153+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.905806+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.905806+0000 mon.a (mon.0) 909 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.906147+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.906147+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.943672+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.943672+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.944229+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.944229+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.964912+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.964912+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.965006+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.965006+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.968625+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.968625+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.978512+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.978512+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.981801+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.981801+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.986736+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.986736+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.991322+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:56.991322+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.003095+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.003095+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.003258+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.003258+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.010622+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.010622+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.025904+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.025904+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.029140+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.029140+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.131376+0000 mon.c (mon.2) 106 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.131376+0000 mon.c (mon.2) 106 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749339+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749339+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749376+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749376+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749403+0000 mon.a (mon.0) 920 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749403+0000 mon.a (mon.0) 920 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749422+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749422+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 
192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749440+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749440+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749464+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749464+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749486+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.749486+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.774622+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.774622+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.774748+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 
192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.774748+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:57.809444+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: cluster 2026-03-09T17:28:57.809444+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.810501+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.810501+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.810823+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.810823+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.811232+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.811232+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.816844+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 
192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.816844+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.831526+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:58.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:57 vm00 bash[28333]: audit 2026-03-09T17:28:57.831526+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.854856+0000 mon.a [INF] from='client.? ' entity='client.admin' cm list_parallel: process_1_[60532]: starting. 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60532]: creating pool ceph_test_rados_list_parallel.vm00-60488 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60532]: created object 0... 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60532]: created object 25... 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60532]: created object 49... 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60532]: finishing. 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60532]: shutting down. 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60533]: starting. 2026-03-09T17:28:58.332 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60533]: listing objects. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60533]: listed object 0... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60533]: listed object 25... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60533]: saw 50 objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60533]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: starting. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: creating pool ceph_test_rados_list_parallel.vm00-60488 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: created object 0... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: created object 25... 
2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: created object 49... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: finishing. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[61286]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[61287]: starting. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[61287]: listing objects. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[61287]: listed object 0... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[61287]: listed object 25... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[61287]: saw 45 objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[61287]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[61288]: starting. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[61288]: removed 25 objects... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[61288]: removed half of the objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[61288]: removed 50 objects... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[61288]: removed 50 objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[61288]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: starting. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: creating pool ceph_test_rados_list_parallel.vm00-60488 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: created object 0... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: created object 25... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: created object 49... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: finishing. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61334]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: starting. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: listing objects. 
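[editorial note] The list_parallel output above and below comes from ceph_test_rados_list_parallel, which runs several processes that concurrently create the pool, add objects, list them, and remove them, checking that listings stay sane while the object set churns. Done by hand against the same pool, each step maps roughly onto these CLI calls (obj_0 and the payload file are placeholders; a minimal sketch):
  ceph osd pool create ceph_test_rados_list_parallel.vm00-60488 8
  rados -p ceph_test_rados_list_parallel.vm00-60488 put obj_0 /etc/hostname
  rados -p ceph_test_rados_list_parallel.vm00-60488 ls
  rados -p ceph_test_rados_list_parallel.vm00-60488 rm obj_0
  ceph osd pool delete ceph_test_rados_list_parallel.vm00-60488 ceph_test_rados_list_parallel.vm00-60488 --yes-i-really-really-mean-it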
2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: listed object 0... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: listed object 25... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: listed object 50... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: saw 53 objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61335]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61336]: starting. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61336]: added 25 objects... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61336]: added half of the objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61336]: added 50 objects... 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61336]: added 50 objects 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61336]: shutting down. 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.333 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:56.673575+0000 mgr.y (mgr.14505) 108 : cluster [DBG] pgmap v67: 828 pgs: 568 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:56.673575+0000 mgr.y (mgr.14505) 108 : cluster [DBG] pgmap v67: 828 pgs: 568 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710229+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710229+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710273+0000 mon.a (mon.0) 899 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710273+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59908-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710296+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710296+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710323+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710323+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710366+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.710366+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.723400+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.723400+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 
192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.723439+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.723439+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:56.765314+0000 mon.a (mon.0) 903 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:56.765314+0000 mon.a (mon.0) 903 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.771535+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/3545648387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.771535+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/3545648387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.785946+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.785946+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.787082+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
192.168.123.100:0/1676617362' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.787082+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.804415+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.804415+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.835122+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.835122+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.845931+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.845931+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.892034+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.892034+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:56.892153+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:56.892153+0000 client.admin (client.?) 
0 : cluster [INF] twoxx 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.905806+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.905806+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.906147+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.906147+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.943672+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.943672+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.944229+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.944229+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.964912+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.964912+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.965006+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.965006+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.968625+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.968625+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.978512+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.978512+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.981801+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.981801+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.986736+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.986736+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.991322+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:56.991322+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.003095+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.003095+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.003258+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.003258+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.010622+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.010622+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.025904+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.025904+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.029140+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.029140+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.131376+0000 mon.c (mon.2) 106 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.131376+0000 mon.c (mon.2) 106 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749339+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749339+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59921-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749376+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749376+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749403+0000 mon.a (mon.0) 920 : audit [INF] from='client.? 
192.168.123.100:0/1676617362' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749403+0000 mon.a (mon.0) 920 : audit [INF] from='client.? 192.168.123.100:0/1676617362' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749422+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749422+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749440+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749440+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749464+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749464+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-60068-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749486+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.749486+0000 mon.a (mon.0) 924 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-60039-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.774622+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.774622+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.774748+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.774748+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:57.809444+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: cluster 2026-03-09T17:28:57.809444+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.810501+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.810501+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.810823+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 
192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.810823+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.811232+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.811232+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:28:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.816844+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.816844+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:58.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.831526+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:58.388 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:57 vm02 bash[23351]: audit 2026-03-09T17:28:57.831526+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:28:58.669 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: starting. 2026-03-09T17:28:58.669 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: creating pool ceph_test_rados_list_parallel.vm00-60488 2026-03-09T17:28:58.669 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: created object 0... 2026-03-09T17:28:58.669 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: created object 25... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: created object 49... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: finishing. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61933]: shutting down. 
2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: starting. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: listing objects. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: listed object 0... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: listed object 25... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: listed object 50... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: listed object 75... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: saw 98 objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61934]: shutting down. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61937]: starting. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61937]: removed 25 objects... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61937]: removed half of the objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61937]: removed 50 objects... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61937]: removed 50 objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61937]: shutting down. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61936]: starting. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61936]: added 25 objects... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61936]: added half of the objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61936]: added 50 objects... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61936]: added 50 objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61936]: shutting down. 
2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61935]: starting. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61935]: added 25 objects... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61935]: added half of the objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61935]: added 50 objects... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61935]: added 50 objects 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61935]: shutting down. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: starting. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: creating pool ceph_test_rados_list_parallel.vm00-60488 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: created object 0... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: created object 25... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: created object 49... 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: finishing. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[62013]: shutting down. 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.670 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: starting. 2026-03-09T17:28:58.671 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listing objects. 2026-03-09T17:28:58.671 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listed object 0... 2026-03-09T17:28:58.671 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listed object 25... 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listed object 50... 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listed object 75... 
2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listed object 100... 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: listed object 125... 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: saw 150 objects 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[62014]: shutting down. 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[62015]: starting. 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[62015]: added 25 objects... 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[62015]: added half of the objects 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[62015]: added 50 objects... 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[62015]: added 50 objects 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[62015]: shutting down. 2026-03-09T17:28:58.961 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.962 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.962 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.962 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.962 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-09T17:28:58.962 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******* SUCCESS ********** 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.872527+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.872527+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.927611+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.927611+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 
192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.928489+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.928489+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.935692+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.935692+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.936420+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.936420+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.940328+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/2780728188' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:57.940328+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/2780728188' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:58.132378+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:28:58 vm00 bash[28333]: audit 2026-03-09T17:28:58.132378+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:58 vm00 bash[20770]: audit 2026-03-09T17:28:57.872527+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:58 vm00 bash[20770]: audit 2026-03-09T17:28:57.872527+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.927611+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.927611+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.928489+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.928489+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.935692+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.935692+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.936420+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.936420+0000 mon.a (mon.0) 933 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.940328+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/2780728188' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:57.940328+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/2780728188' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:58.132378+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:28:59 vm00 bash[20770]: audit 2026-03-09T17:28:58.132378+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:59.328 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: Running main() from gmock_main.cc 2026-03-09T17:28:59.328 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [==========] Running 39 tests from 2 test suites. 2026-03-09T17:28:59.328 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] Global test environment set-up. 2026-03-09T17:28:59.328 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP 2026-03-09T17:28:59.328 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: seed 59929 2026-03-09T17:28:59.328 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TooBigPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.TooBigPP (0 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SimpleWritePP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.SimpleWritePP (264 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadOpPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadOpPP (8 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SparseReadOpPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.SparseReadOpPP (12 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP (6 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP2 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP2 (2 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.Checksum 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.Checksum (5 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadIntoBufferlist 
2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadIntoBufferlist (2 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.OverlappingWriteRoundTripPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.OverlappingWriteRoundTripPP (5 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP (4 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP2 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP2 (3 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.AppendRoundTripPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.AppendRoundTripPP (3 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TruncTestPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.TruncTestPP (4 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RemoveTestPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RemoveTestPP (5 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrsRoundTripPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrsRoundTripPP (6 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RmXattrPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RmXattrPP (15 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrListPP (6 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CrcZeroWrite 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CrcZeroWrite (5 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtPP (4 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtDNEPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtDNEPP (1 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtMismatchPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtMismatchPP (3 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP (364 ms total) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: 
api_io_pp: [ RUN ] LibRadosIoECPP.SimpleWritePP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SimpleWritePP (1322 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.ReadOpPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.ReadOpPP (19 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SparseReadOpPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SparseReadOpPP (29 ms) 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP 2026-03-09T17:28:59.329 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP (30 ms) 2026-03-09T17:28:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.872527+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.872527+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.927611+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.927611+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.928489+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.928489+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.935692+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.935692+0000 mon.a (mon.0) 932 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.936420+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.936420+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.940328+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/2780728188' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:57.940328+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/2780728188' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:58.132378+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:28:58 vm02 bash[23351]: audit 2026-03-09T17:28:58.132378+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:28:59.997 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] L delete_pools_parallel: process_1_[60630]: starting. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60630]: creating pool ceph_test_rados_delete_pools_parallel.vm00-60598 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60630]: created object 0... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60630]: created object 25... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60630]: created object 49... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60630]: finishing. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60630]: shutting down. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_2_[60647]: starting. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_2_[60647]: deleting pool ceph_test_rados_delete_pools_parallel.vm00-60598 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_2_[60647]: shutting down. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: starting. 
2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: creating pool ceph_test_rados_delete_pools_parallel.vm00-60598 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: created object 0... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: created object 25... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: created object 49... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: finishing. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61415]: shutting down. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61417]: starting. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61417]: listing objects. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61417]: listed object 0... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61417]: listed object 25... 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61417]: saw 50 objects 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61417]: shutting down. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_4_[61416]: starting. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_4_[61416]: deleting pool ceph_test_rados_delete_pools_parallel.vm00-60598 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_4_[61416]: shutting down. 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-09T17:28:59.998 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******* SUCCESS ********** 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout:d=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.862104+0000 mon.c [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.862287+0000 mon.c [INF] from='client.? 
192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.886096+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-60256-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:55.897508+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-60225-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:57.872527+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:57.927611+0000 mon.c [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:57.928489+0000 mon.c [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:57.935692+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:57.936420+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.872223+0000 mon.a [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.876171+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.876208+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.876231+0000 mon.a [INF] from='client.? 
192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.876262+0000 mon.a [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.023 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.876287+0000 mon.a [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58. cmd: Running main() from gmock_main.cc 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [==========] Running 3 tests from 1 test suite. 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] Global test environment set-up. 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] 3 tests from NeoRadosCmd 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ RUN ] NeoRadosCmd.MonDescribe 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ OK ] NeoRadosCmd.MonDescribe (1815 ms) 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ RUN ] NeoRadosCmd.OSDCmd 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ OK ] NeoRadosCmd.OSDCmd (2244 ms) 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ RUN ] NeoRadosCmd.PGCmd 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ OK ] NeoRadosCmd.PGCmd (3238 ms) 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] 3 tests from NeoRadosCmd (7297 ms total) 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] Global test environment tear-down 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [==========] 3 tests from 1 test suite ran. (7297 ms total) 2026-03-09T17:29:00.029 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ PASSED ] 3 tests. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: starting. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: creating pool ceph_test_rados_open_pools_parallel.vm00-60529 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: created object 0... 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: created object 25... 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: created object 49... 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: finishing. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60597]: shutting down. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60601]: starting. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60601]: rados_pool_create. 
2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60601]: rados_ioctx_create. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60601]: shutting down. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: starting. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: creating pool ceph_test_rados_open_pools_parallel.vm00-60529 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: created object 0... 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: created object 25... 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: created object 49... 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: finishing. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61423]: shutting down. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61424]: starting. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61424]: rados_pool_create. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61424]: rados_ioctx_create. 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61424]: shutting down. 
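The open_pools_parallel and delete_pools_parallel helpers whose progress lines appear above fork several processes that race pool creation, object writes, listing, and pool deletion against each other. Below is a rough single-process sketch of the librados calls those helpers exercise, not the test programs themselves; the pool name and conffile path are assumed, and pool deletion additionally requires mon_allow_pool_delete to be enabled on the monitors.

    import rados

    POOL = "open_pools_parallel_example"   # illustrative pool name
    CONF = "/etc/ceph/ceph.conf"           # assumed config path

    # First handle: create the pool and write 50 small objects, matching the
    # "created object 0... / 25... / 49..." progress printed by the test.
    writer = rados.Rados(conffile=CONF)
    writer.connect()
    writer.create_pool(POOL)
    ioctx = writer.open_ioctx(POOL)
    for i in range(50):
        ioctx.write_full("obj.%d" % i, b"payload")
    ioctx.close()

    # Second handle: open the same pool independently and count the objects,
    # as the listing process does before reporting "saw 50 objects".
    reader = rados.Rados(conffile=CONF)
    reader.connect()
    ioctx = reader.open_ioctx(POOL)
    print("saw %d objects" % sum(1 for _ in ioctx.list_objects()))
    ioctx.close()

    # The delete variant races a pool delete from another process; deleting
    # pools requires 'mon allow pool delete = true' on the monitors.
    reader.delete_pool(POOL)
    reader.shutdown()
    writer.shutdown()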
2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-09T17:29:00.039 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******* SUCCESS ********** 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:58.674190+0000 mgr.y (mgr.14505) 109 : cluster [DBG] pgmap v70: 980 pgs: 192 creating+peering, 272 unknown, 516 active+clean; 465 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 56 KiB/s wr, 157 op/s 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:58.674190+0000 mgr.y (mgr.14505) 109 : cluster [DBG] pgmap v70: 980 pgs: 192 creating+peering, 272 unknown, 516 active+clean; 465 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 56 KiB/s wr, 157 op/s 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:58.872223+0000 mon.a (mon.0) 934 : cluster [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:58.872223+0000 mon.a (mon.0) 934 : cluster [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876171+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876171+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876208+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876208+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876231+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 
192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876231+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876262+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876262+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876287+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876287+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876305+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.876305+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.880770+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.880770+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.904959+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 
192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.904959+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.918252+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.918252+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.924770+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.924770+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:58.941120+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:58.941120+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.947698+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.947698+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.956159+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.956159+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.968112+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.968112+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.968355+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.968355+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.977974+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.977974+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.978126+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:58.978126+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.141086+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 
192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.141086+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.160068+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.160068+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.417666+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.417666+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.418326+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.418326+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.899746+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.899746+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:59.900611+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: cluster 2026-03-09T17:28:59.900611+0000 client.admin (client.?) 
0 : cluster [INF] threexx 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.966261+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:00 vm00 bash[28333]: audit 2026-03-09T17:28:59.966261+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:58.674190+0000 mgr.y (mgr.14505) 109 : cluster [DBG] pgmap v70: 980 pgs: 192 creating+peering, 272 unknown, 516 active+clean; 465 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 56 KiB/s wr, 157 op/s 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:58.674190+0000 mgr.y (mgr.14505) 109 : cluster [DBG] pgmap v70: 980 pgs: 192 creating+peering, 272 unknown, 516 active+clean; 465 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 56 KiB/s wr, 157 op/s 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:58.872223+0000 mon.a (mon.0) 934 : cluster [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:58.872223+0000 mon.a (mon.0) 934 : cluster [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876171+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876171+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876208+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876208+0000 mon.a (mon.0) 936 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876231+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876231+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876262+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876262+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876287+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876287+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876305+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.876305+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.880770+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.880770+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.904959+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.904959+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.918252+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.918252+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.924770+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.924770+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:58.941120+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:58.941120+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.947698+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.947698+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 
192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.956159+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.956159+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.968112+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.968112+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.968355+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.968355+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.977974+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.977974+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.978126+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:58.978126+0000 mon.a (mon.0) 947 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.141086+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.141086+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.160068+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.160068+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.417666+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.417666+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.418326+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.418326+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.899746+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.899746+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 
192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:59.900611+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: cluster 2026-03-09T17:28:59.900611+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.966261+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:00 vm00 bash[20770]: audit 2026-03-09T17:28:59.966261+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:58.674190+0000 mgr.y (mgr.14505) 109 : cluster [DBG] pgmap v70: 980 pgs: 192 creating+peering, 272 unknown, 516 active+clean; 465 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 56 KiB/s wr, 157 op/s 2026-03-09T17:29:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:58.674190+0000 mgr.y (mgr.14505) 109 : cluster [DBG] pgmap v70: 980 pgs: 192 creating+peering, 272 unknown, 516 active+clean; 465 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 56 KiB/s wr, 157 op/s 2026-03-09T17:29:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:58.872223+0000 mon.a (mon.0) 934 : cluster [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:58.872223+0000 mon.a (mon.0) 934 : cluster [WRN] Health check update: 18 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876171+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876171+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-60225-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876208+0000 mon.a (mon.0) 936 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876208+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-60256-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876231+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876231+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 192.168.123.100:0/2267111769' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60745-1"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876262+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876262+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59916-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876287+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876287+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876305+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.876305+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.880770+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.880770+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.904959+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.904959+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.918252+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.918252+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.924770+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.924770+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:58.941120+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:58.941120+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.947698+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 
192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.947698+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.956159+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.956159+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.968112+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.968112+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.968355+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.968355+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.977974+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.977974+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.978126+0000 mon.a (mon.0) 947 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:58.978126+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.141086+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.141086+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.160068+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.160068+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.417666+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.417666+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.418326+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.418326+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.899746+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 
192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.899746+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:59.900611+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: cluster 2026-03-09T17:28:59.900611+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T17:29:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.966261+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:00.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:00 vm02 bash[23351]: audit 2026-03-09T17:28:59.966261+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:01.323 INFO:tasks.workunit.client.0.vm00.stdout: api_io: Running main() from gmock_main.cc 2026-03-09T17:29:01.323 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [==========] Running 24 tests from 2 test suites. 2026-03-09T17:29:01.323 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] Global test environment set-up. 2026-03-09T17:29:01.323 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 14 tests from LibRadosIo 2026-03-09T17:29:01.323 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.SimpleWrite 2026-03-09T17:29:01.323 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.SimpleWrite (231 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.TooBig 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.TooBig (0 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.ReadTimeout 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/ 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/ 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/ 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/ 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/ 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.ReadTimeout (29 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.RoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.RoundTrip (6 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.Checksum 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.Checksum (2 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.OverlappingWriteRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.OverlappingWriteRoundTrip (5 ms) 2026-03-09T17:29:01.324 
INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.WriteFullRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.WriteFullRoundTrip (9 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.AppendRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.AppendRoundTrip (4 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.ZeroLenZero 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.ZeroLenZero (3 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.TruncTest 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.TruncTest (3 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.RemoveTest 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.RemoveTest (4 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.XattrsRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.XattrsRoundTrip (4 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.RmXattr 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.RmXattr (13 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.XattrIter 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.XattrIter (6 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 14 tests from LibRadosIo (319 ms total) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 10 tests from LibRadosIoEC 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.SimpleWrite 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.SimpleWrite (1471 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.RoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.RoundTrip (16 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.OverlappingWriteRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.OverlappingWriteRoundTrip (25 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.WriteFullRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.WriteFullRoundTrip (12 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.AppendRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.AppendRoundTrip (10 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.TruncTest 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.TruncTest (7 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.RemoveTest 2026-03-09T17:29:01.324 
INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.RemoveTest (5 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.XattrsRoundTrip 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.XattrsRoundTrip (10 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.RmXattr 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.RmXattr (20 ms) 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.XattrIter 2026-03-09T17:29:01.324 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.XattrIter (6 ms) 2026-03-09T17:29:01.325 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 10 tests from LibRadosIoEC (1582 ms total) 2026-03-09T17:29:01.325 INFO:tasks.workunit.client.0.vm00.stdout: api_io: 2026-03-09T17:29:01.325 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] Global test environment tear-down 2026-03-09T17:29:01.325 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [==========] 24 tests from 2 test suites ran. (9188 ms total) 2026-03-09T17:29:01.325 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ PASSED ] 24 tests. 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.969905+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.969905+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970190+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970190+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970215+0000 mon.a (mon.0) 953 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970215+0000 mon.a (mon.0) 953 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970232+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970232+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970249+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970249+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970273+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970273+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970294+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:01.530 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970294+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970318+0000 mon.a (mon.0) 958 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970318+0000 mon.a (mon.0) 958 : audit [INF] from='client.? 
192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970346+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.970346+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: cluster 2026-03-09T17:28:59.975611+0000 mon.a (mon.0) 960 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: cluster 2026-03-09T17:28:59.975611+0000 mon.a (mon.0) 960 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.992604+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2770923274' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.992604+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2770923274' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.992791+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/4205885940' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:28:59.992791+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/4205885940' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.011186+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.011186+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 
192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.011404+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.011404+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.012006+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.012006+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.021337+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.021337+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.038772+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.038772+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: cluster 2026-03-09T17:29:00.063944+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: cluster 2026-03-09T17:29:00.063944+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.085273+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 
192.168.123.100:0/1138740917' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.085273+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.100:0/1138740917' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.085590+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.085590+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.086282+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.086282+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.086799+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.086799+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.097439+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.097439+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.097513+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.097513+0000 mon.a (mon.0) 968 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.112934+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.112934+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.163507+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:01.531 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:01 vm02 bash[23351]: audit 2026-03-09T17:29:00.163507+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.969905+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.969905+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970190+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970190+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970215+0000 mon.a (mon.0) 953 : audit [INF] from='client.? 
192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970215+0000 mon.a (mon.0) 953 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970232+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970232+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970249+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970249+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970273+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970273+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970294+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970294+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970318+0000 mon.a (mon.0) 958 : audit [INF] from='client.? 
192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970318+0000 mon.a (mon.0) 958 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970346+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.970346+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: cluster 2026-03-09T17:28:59.975611+0000 mon.a (mon.0) 960 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: cluster 2026-03-09T17:28:59.975611+0000 mon.a (mon.0) 960 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.992604+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2770923274' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.992604+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2770923274' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.992791+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/4205885940' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:28:59.992791+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/4205885940' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.011186+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 
192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.011186+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.011404+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.011404+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.012006+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.012006+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.021337+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.021337+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.038772+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.038772+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: cluster 2026-03-09T17:29:00.063944+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: cluster 2026-03-09T17:29:00.063944+0000 client.admin (client.?) 
0 : cluster [INF] fourxx 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.085273+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.100:0/1138740917' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.085273+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.100:0/1138740917' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.085590+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.085590+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.086282+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.086282+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.086799+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.086799+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.097439+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.097439+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.097513+0000 mon.a (mon.0) 968 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.097513+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.112934+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.112934+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.163507+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:01 vm00 bash[28333]: audit 2026-03-09T17:29:00.163507+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.969905+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.969905+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-60068-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970190+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970190+0000 mon.a (mon.0) 952 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-60039-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970215+0000 mon.a (mon.0) 953 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970215+0000 mon.a (mon.0) 953 : audit [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970232+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970232+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970249+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970249+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970273+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970273+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970294+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970294+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970318+0000 mon.a (mon.0) 958 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970318+0000 mon.a (mon.0) 958 : audit [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970346+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.970346+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: cluster 2026-03-09T17:28:59.975611+0000 mon.a (mon.0) 960 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: cluster 2026-03-09T17:28:59.975611+0000 mon.a (mon.0) 960 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.992604+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2770923274' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.992604+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2770923274' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.992791+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/4205885940' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:28:59.992791+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 
192.168.123.100:0/4205885940' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.011186+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.011186+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.011404+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.011404+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.012006+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.012006+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.021337+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.021337+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.038772+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.038772+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 
192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: cluster 2026-03-09T17:29:00.063944+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: cluster 2026-03-09T17:29:00.063944+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.085273+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.100:0/1138740917' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.085273+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.100:0/1138740917' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.085590+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.085590+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.086282+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.086282+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.086799+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.086799+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.097439+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.097439+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.097513+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.097513+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.112934+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.112934+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.163507+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:01.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:01 vm00 bash[20770]: audit 2026-03-09T17:29:00.163507+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:01.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:29:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:00.674742+0000 mgr.y (mgr.14505) 110 : cluster [DBG] pgmap v73: 908 pgs: 32 creating+peering, 392 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 23 KiB/s wr, 117 op/s 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:00.674742+0000 mgr.y (mgr.14505) 110 : cluster [DBG] pgmap v73: 908 pgs: 32 creating+peering, 392 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 23 KiB/s wr, 117 op/s 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113068+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113068+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113114+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113114+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113143+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.308 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113143+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.373 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: Running main() from gmock_main.cc 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [==========] Running 9 tests from 2 test suites. 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] Global test environment set-up. 
2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: seed 60256 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPP (575 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.Stat2Mtime2PP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.Stat2Mtime2PP (46 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.ClusterStatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.ClusterStatPP (0 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.PoolStatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.PoolStatPP (4 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPPNS 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPPNS (32 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP (657 ms total) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPP (1354 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.ClusterStatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.ClusterStatPP (0 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.PoolStatPP 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.PoolStatPP (28 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPPNS 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPPNS (42 ms) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP (1424 ms total) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] Global test environment tear-down 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [==========] 9 tests from 2 test suites ran. (9947 ms total) 2026-03-09T17:29:02.374 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ PASSED ] 9 tests. 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: Running main() from gmock_main.cc 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [==========] Running 9 tests from 2 test suites. 
2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] Global test environment set-up. 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 5 tests from LibRadosStat 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.Stat 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.Stat (616 ms) 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.Stat2 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.Stat2 (11 ms) 2026-03-09T17:29:02.376 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.StatNS 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.StatNS (45 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.ClusterStat 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.ClusterStat (1 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.PoolStat 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.PoolStat (4 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 5 tests from LibRadosStat (677 ms total) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 4 tests from LibRadosStatEC 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.Stat 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.Stat (1363 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.StatNS 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.StatNS (54 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.ClusterStat 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.ClusterStat (0 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.PoolStat 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.PoolStat (7 ms) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 4 tests from LibRadosStatEC (1424 ms total) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] Global test environment tear-down 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [==========] 9 tests from 2 test suites ran. (9977 ms total) 2026-03-09T17:29:02.377 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ PASSED ] 9 tests. 2026-03-09T17:29:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113166+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113166+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113187+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.113187+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:01.123734+0000 mon.a (mon.0) 975 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:01.123734+0000 mon.a (mon.0) 975 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.155181+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.155181+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.155384+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.155384+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.158297+0000 mon.a (mon.0) 978 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.158297+0000 mon.a (mon.0) 978 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.164819+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.164819+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.168042+0000 mon.c (mon.2) 120 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.168042+0000 mon.c (mon.2) 120 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.169125+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.169125+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.169152+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.169152+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.169174+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 
192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.169174+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.179011+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.179011+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.264151+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.264151+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.266829+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.266829+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.267022+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.267022+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.267135+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.267135+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:01.275320+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:01.275320+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286896+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286896+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286932+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286932+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286958+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286958+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286978+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.286978+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.287005+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.287005+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.287024+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.287024+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.287042+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.287042+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:01.297862+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: cluster 2026-03-09T17:29:01.297862+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.300582+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.300582+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.312712+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 
192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.312712+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.314535+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.314535+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.315119+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.315119+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.323263+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.323263+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.330525+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.330525+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.331430+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.331430+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.545491+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.545491+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.545861+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.545861+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.557841+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:01.557841+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:02.177682+0000 mon.c (mon.2) 128 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:02 vm02 bash[23351]: audit 2026-03-09T17:29:02.177682+0000 mon.c (mon.2) 128 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:00.674742+0000 mgr.y (mgr.14505) 110 : cluster [DBG] pgmap v73: 908 pgs: 32 creating+peering, 392 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 23 KiB/s wr, 117 op/s 2026-03-09T17:29:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:00.674742+0000 mgr.y (mgr.14505) 110 : cluster [DBG] pgmap v73: 908 pgs: 32 creating+peering, 392 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 23 KiB/s wr, 117 op/s 2026-03-09T17:29:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113068+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113068+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113114+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113114+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113143+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113143+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113166+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113166+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113187+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.113187+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 
192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:01.123734+0000 mon.a (mon.0) 975 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:01.123734+0000 mon.a (mon.0) 975 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.155181+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.155181+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.155384+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.155384+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.158297+0000 mon.a (mon.0) 978 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.158297+0000 mon.a (mon.0) 978 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.164819+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.164819+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 
192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.168042+0000 mon.c (mon.2) 120 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.168042+0000 mon.c (mon.2) 120 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.169125+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.169125+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.169152+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.169152+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.169174+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.169174+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.179011+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.179011+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.264151+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.264151+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.266829+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.266829+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.267022+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.267022+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.267135+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.267135+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:01.275320+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:01.275320+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286896+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286896+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286932+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286932+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286958+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286958+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286978+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.286978+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.287005+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.287005+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.287024+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.287024+0000 mon.a (mon.0) 990 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:02.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.287042+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.287042+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:01.297862+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: cluster 2026-03-09T17:29:01.297862+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.300582+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.300582+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.312712+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.312712+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.314535+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.314535+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.315119+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 
192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.315119+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.323263+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.323263+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.330525+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.330525+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.331430+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.331430+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.545491+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.545491+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.545861+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.545861+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.557841+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:01.557841+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:02.177682+0000 mon.c (mon.2) 128 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:02 vm00 bash[28333]: audit 2026-03-09T17:29:02.177682+0000 mon.c (mon.2) 128 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:00.674742+0000 mgr.y (mgr.14505) 110 : cluster [DBG] pgmap v73: 908 pgs: 32 creating+peering, 392 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 23 KiB/s wr, 117 op/s 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:00.674742+0000 mgr.y (mgr.14505) 110 : cluster [DBG] pgmap v73: 908 pgs: 32 creating+peering, 392 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 23 KiB/s wr, 117 op/s 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113068+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113068+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113114+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113114+0000 mon.a (mon.0) 971 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113143+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113143+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113166+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113166+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-4", "overlaypool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113187+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.113187+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60801-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:01.123734+0000 mon.a (mon.0) 975 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:01.123734+0000 mon.a (mon.0) 975 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.155181+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.155181+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 
192.168.123.100:0/51010100' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.155384+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.155384+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.158297+0000 mon.a (mon.0) 978 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.158297+0000 mon.a (mon.0) 978 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.164819+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.164819+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.168042+0000 mon.c (mon.2) 120 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.168042+0000 mon.c (mon.2) 120 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.169125+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 
192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.169125+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.169152+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.169152+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.169174+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.169174+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.179011+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.179011+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.264151+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.264151+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.266829+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.266829+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.267022+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.267022+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.267135+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.267135+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:01.275320+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:01.275320+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286896+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286896+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60745-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286932+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286932+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 192.168.123.100:0/51010100' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59921-16"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286958+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286958+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286978+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.286978+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-4-cache", "mode": "writeback"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.287005+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:02.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.287005+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.287024+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.287024+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.287042+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.287042+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:01.297862+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: cluster 2026-03-09T17:29:01.297862+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.300582+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.300582+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.312712+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.312712+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/3085080428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.314535+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.314535+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.315119+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.315119+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.323263+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 
192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.323263+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/2286250349' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.330525+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.330525+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.331430+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.331430+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.545491+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.545491+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.545861+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.545861+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.557841+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:01.557841+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:02.177682+0000 mon.c (mon.2) 128 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:02.792 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:02 vm00 bash[20770]: audit 2026-03-09T17:29:02.177682+0000 mon.c (mon.2) 128 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout:876305+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60328-1"}]': finished 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.880770+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.904959+0000 mon.c [INF] from='client.? 192.168.123.100:0/535783413' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.918252+0000 mon.c [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.924770+0000 mon.c [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.947698+0000 mon.c [INF] from='client.? 192.168.123.100:0/3725845797' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.956159+0000 mon.a [INF] from='client.? 192.168.123.100:0/107729841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60801-1"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.968112+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60328-1"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.968355+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.977974+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60745-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:58.978126+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59908-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:59.141086+0000 mon.a [INF] from='client.? 192.168.123.100:0/1016397261' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59916-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:59.417666+0000 mon.c [INF] from='client.? 192.168.123.100:0/2330818528' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:59.418326+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59929-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:59.899746+0000 mon.b [INF] from='client.? 192.168.123.100:0/2901686038' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:59.900611+0000 client.admin [INF] threexx 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-09T17:28:59.966261+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.WatchLog (7523 ms) 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] 4 tests from LibRadosCmd (10697 ms total) 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] Global test environment tear-down 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [==========] 4 tests from 1 test suite ran. (10698 ms total) 2026-03-09T17:29:03.195 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ PASSED ] 4 tests. 2026-03-09T17:29:03.391 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Running main() from gmock_main.cc 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [==========] Running 2 tests from 1 test suite. 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] Global test environment set-up. 
2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotify 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: handle_notify cookie 94645126946512 notify_id 292057776128 notifier_gid 24728 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotify (2832 ms) 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotifyTimeout 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Trying... 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: handle_notify cookie 94645139856576 notify_id 304942678018 notifier_gid 24794 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Waiting for 3.000000000s 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Timed out. 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Flushing... 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Flushed... 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotifyTimeout (7694 ms) 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify (10526 ms total) 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] Global test environment tear-down 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [==========] 2 tests from 1 test suite ran. (10526 ms total) 2026-03-09T17:29:03.392 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ PASSED ] 2 tests. 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:01.530165+0000 mgr.y (mgr.14505) 111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:01.530165+0000 mgr.y (mgr.14505) 111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.347852+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.347852+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 
192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.348022+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.348022+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.348219+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.348219+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.348978+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.348978+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.353488+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.353488+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.353631+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.353631+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.353741+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.353741+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]': finished 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.357503+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.357503+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.379923+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/2519560284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.379923+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/2519560284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.380835+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.380835+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: cluster 2026-03-09T17:29:02.387375+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: cluster 2026-03-09T17:29:02.387375+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426406+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426406+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426501+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426501+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426604+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426604+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426679+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.426679+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.431066+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.431066+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 
192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.461434+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.461434+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.475939+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.475939+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.514543+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.514543+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.516165+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.516165+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.517259+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.517259+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.931606+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.931606+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.933195+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.933195+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.933664+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.933664+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.934201+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.934201+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.934698+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.934698+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.935442+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.935442+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.936106+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.936106+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.936859+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.936859+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.937312+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.937312+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.938020+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:03.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:02.938020+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:03.638 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:03.178414+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:03.638 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:03 vm02 bash[23351]: audit 2026-03-09T17:29:03.178414+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:01.530165+0000 mgr.y (mgr.14505) 111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:01.530165+0000 mgr.y (mgr.14505) 111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.347852+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.347852+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.348022+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.348022+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.348219+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.348219+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.348978+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.348978+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.353488+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.353488+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.353631+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.353631+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.353741+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.353741+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]': finished 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.357503+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.357503+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.379923+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/2519560284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.379923+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 
192.168.123.100:0/2519560284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.380835+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.380835+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: cluster 2026-03-09T17:29:02.387375+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: cluster 2026-03-09T17:29:02.387375+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426406+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426406+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426501+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426501+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426604+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426604+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426679+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.426679+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.431066+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.431066+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.461434+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.461434+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.475939+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.475939+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.514543+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.514543+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.516165+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.516165+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.517259+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.517259+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.931606+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.931606+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.933195+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.933195+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.933664+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.933664+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.934201+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.934201+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.934698+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.934698+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.935442+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.935442+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.936106+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.936106+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.936859+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.936859+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.937312+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.937312+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.938020+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:02.938020+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:03.178414+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:03 vm00 bash[28333]: audit 2026-03-09T17:29:03.178414+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:01.530165+0000 mgr.y (mgr.14505) 111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:01.530165+0000 mgr.y (mgr.14505) 111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.347852+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.347852+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60801-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.348022+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 192.168.123.100:0/940979508' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.348022+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 
192.168.123.100:0/940979508' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59908-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.348219+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.348219+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-60225-7"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.348978+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.348978+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.353488+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.353488+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-60256-7"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.353631+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.353631+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-4"}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.353741+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.353741+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60328-6", "pg_num": 4}]': finished 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.357503+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 
192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.357503+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2156928045' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.379923+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/2519560284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.379923+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/2519560284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.380835+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.380835+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: cluster 2026-03-09T17:29:02.387375+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: cluster 2026-03-09T17:29:02.387375+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426406+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426406+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426501+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426501+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426604+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426604+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426679+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.426679+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.431066+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.431066+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.461434+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.461434+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.475939+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.475939+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.514543+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.514543+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.516165+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.516165+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.517259+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.517259+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.931606+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.931606+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.933195+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.933195+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.933664+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.933664+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.934201+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.934201+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.934698+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.934698+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.935442+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.935442+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.936106+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.936106+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.936859+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.936859+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.937312+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.937312+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.938020+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:02.938020+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:03.178414+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:03.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:03 vm00 bash[20770]: audit 2026-03-09T17:29:03.178414+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:04.757 INFO:tasks.workunit.client.0.vm00.stdout: list: Running main() from gmock_main.cc 2026-03-09T17:29:04.757 INFO:tasks.workunit.client.0.vm00.stdout: list: [==========] Running 3 tests from 1 test suite. 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] Global test environment set-up. 
2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] 3 tests from NeoradosList 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ RUN ] NeoradosList.ListObjects 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ OK ] NeoradosList.ListObjects (2916 ms) 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ RUN ] NeoradosList.ListObjectsNS 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ OK ] NeoradosList.ListObjectsNS (3251 ms) 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ RUN ] NeoradosList.ListObjectsMany 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ OK ] NeoradosList.ListObjectsMany (5797 ms) 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] 3 tests from NeoradosList (11964 ms total) 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] Global test environment tear-down 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [==========] 3 tests from 1 test suite ran. (11964 ms total) 2026-03-09T17:29:04.758 INFO:tasks.workunit.client.0.vm00.stdout: list: [ PASSED ] 3 tests. 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:02.675316+0000 mgr.y (mgr.14505) 112 : cluster [DBG] pgmap v77: 776 pgs: 32 creating+peering, 260 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:02.675316+0000 mgr.y (mgr.14505) 112 : cluster [DBG] pgmap v77: 776 pgs: 32 creating+peering, 260 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.931845+0000 mgr.y (mgr.14505) 113 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.931845+0000 mgr.y (mgr.14505) 113 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.933342+0000 mgr.y (mgr.14505) 114 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.933342+0000 mgr.y (mgr.14505) 114 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.933820+0000 mgr.y (mgr.14505) 115 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.933820+0000 mgr.y (mgr.14505) 115 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.934361+0000 mgr.y (mgr.14505) 116 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.934361+0000 mgr.y (mgr.14505) 116 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.934925+0000 mgr.y (mgr.14505) 117 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.934925+0000 mgr.y (mgr.14505) 117 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.935657+0000 mgr.y (mgr.14505) 118 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.935657+0000 mgr.y (mgr.14505) 118 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.936276+0000 mgr.y (mgr.14505) 119 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.936276+0000 mgr.y (mgr.14505) 119 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.936988+0000 mgr.y (mgr.14505) 120 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.936988+0000 mgr.y (mgr.14505) 120 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.937479+0000 mgr.y (mgr.14505) 121 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.937479+0000 mgr.y (mgr.14505) 121 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.938159+0000 mgr.y (mgr.14505) 122 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:02.938159+0000 mgr.y (mgr.14505) 122 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.189588+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.189588+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.199654+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.199654+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.348817+0000 mon.a (mon.0) 1014 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.348817+0000 mon.a (mon.0) 1014 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358340+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358340+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358415+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358415+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358484+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358484+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358906+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.358906+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.359095+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.359095+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.364906+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.364906+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.400324+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.100:0/796784356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.400324+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 
192.168.123.100:0/796784356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.401161+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.401161+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.402471+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.402471+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405135+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405135+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405251+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405251+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405638+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405638+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405712+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.405712+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.511673+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.511673+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.532957+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: cluster 2026-03-09T17:29:03.532957+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.545167+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.545167+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.547525+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.547525+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.548044+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:03.548044+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:04.179117+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:04.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:04 vm02 bash[23351]: audit 2026-03-09T17:29:04.179117+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:02.675316+0000 mgr.y (mgr.14505) 112 : cluster [DBG] pgmap v77: 776 pgs: 32 creating+peering, 260 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:02.675316+0000 mgr.y (mgr.14505) 112 : cluster [DBG] pgmap v77: 776 pgs: 32 creating+peering, 260 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.931845+0000 mgr.y (mgr.14505) 113 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.931845+0000 mgr.y (mgr.14505) 113 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.933342+0000 mgr.y (mgr.14505) 114 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.933342+0000 mgr.y (mgr.14505) 114 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.933820+0000 mgr.y (mgr.14505) 115 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.933820+0000 mgr.y (mgr.14505) 115 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.934361+0000 mgr.y (mgr.14505) 116 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.934361+0000 mgr.y (mgr.14505) 116 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.934925+0000 mgr.y (mgr.14505) 117 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.934925+0000 mgr.y (mgr.14505) 117 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.935657+0000 mgr.y (mgr.14505) 118 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.935657+0000 mgr.y (mgr.14505) 118 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.936276+0000 mgr.y (mgr.14505) 119 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.936276+0000 mgr.y (mgr.14505) 119 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.936988+0000 mgr.y (mgr.14505) 120 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.936988+0000 mgr.y (mgr.14505) 120 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.937479+0000 mgr.y (mgr.14505) 121 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.937479+0000 mgr.y (mgr.14505) 121 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.938159+0000 mgr.y (mgr.14505) 122 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:02.938159+0000 mgr.y (mgr.14505) 122 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.189588+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.189588+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.199654+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.199654+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.348817+0000 mon.a (mon.0) 1014 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.348817+0000 mon.a (mon.0) 1014 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358340+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358340+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358415+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358415+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358484+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358484+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358906+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.358906+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.359095+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.359095+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.364906+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.364906+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.400324+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.100:0/796784356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.400324+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 
192.168.123.100:0/796784356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.401161+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.401161+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.402471+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.402471+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405135+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405135+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405251+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405251+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405638+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405638+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405712+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.405712+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.511673+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.511673+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.532957+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: cluster 2026-03-09T17:29:03.532957+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.545167+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.545167+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.547525+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.547525+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.548044+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:03.548044+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:04.179117+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:04 vm00 bash[20770]: audit 2026-03-09T17:29:04.179117+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:02.675316+0000 mgr.y (mgr.14505) 112 : cluster [DBG] pgmap v77: 776 pgs: 32 creating+peering, 260 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:05.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:02.675316+0000 mgr.y (mgr.14505) 112 : cluster [DBG] pgmap v77: 776 pgs: 32 creating+peering, 260 unknown, 484 active+clean; 459 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.931845+0000 mgr.y (mgr.14505) 113 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.931845+0000 mgr.y (mgr.14505) 113 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.933342+0000 mgr.y (mgr.14505) 114 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.933342+0000 mgr.y (mgr.14505) 114 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.933820+0000 mgr.y (mgr.14505) 115 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.933820+0000 mgr.y (mgr.14505) 115 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.934361+0000 mgr.y (mgr.14505) 116 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.934361+0000 mgr.y (mgr.14505) 116 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.934925+0000 mgr.y (mgr.14505) 117 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.934925+0000 mgr.y (mgr.14505) 117 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.935657+0000 mgr.y (mgr.14505) 118 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.935657+0000 mgr.y (mgr.14505) 118 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.936276+0000 mgr.y (mgr.14505) 119 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.936276+0000 mgr.y (mgr.14505) 119 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.936988+0000 mgr.y (mgr.14505) 120 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.936988+0000 mgr.y (mgr.14505) 120 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.937479+0000 mgr.y (mgr.14505) 121 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.937479+0000 mgr.y (mgr.14505) 121 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.938159+0000 mgr.y (mgr.14505) 122 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:02.938159+0000 mgr.y (mgr.14505) 122 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.189588+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.189588+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.199654+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.199654+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.348817+0000 mon.a (mon.0) 1014 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.348817+0000 mon.a (mon.0) 1014 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358340+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358340+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-4", "tierpool": "test-rados-api-vm00-60118-4-cache"}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358415+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358415+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358484+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358484+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358906+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.358906+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.359095+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.359095+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59929-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.364906+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.364906+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.400324+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.100:0/796784356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.400324+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 
192.168.123.100:0/796784356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.401161+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.401161+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.402471+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.402471+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405135+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405135+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:05.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405251+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405251+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405638+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405638+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405712+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.405712+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.511673+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.511673+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.532957+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: cluster 2026-03-09T17:29:03.532957+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.545167+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.545167+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.547525+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.547525+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.548044+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:03.548044+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:04.179117+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:05.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:04 vm00 bash[28333]: audit 2026-03-09T17:29:04.179117+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: Running main() from gmock_main.cc 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [==========] Running 2 tests from 1 test suite. 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] Global test environment set-up. 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] 2 tests from NeoRadosECIo 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ RUN ] NeoRadosECIo.SimpleWrite 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ OK ] NeoRadosECIo.SimpleWrite (6118 ms) 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ RUN ] NeoRadosECIo.ReadOp 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ OK ] NeoRadosECIo.ReadOp (6920 ms) 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] 2 tests from NeoRadosECIo (13038 ms total) 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] Global test environment tear-down 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [==========] 2 tests from 1 test suite ran. (13041 ms total) 2026-03-09T17:29:05.800 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ PASSED ] 2 tests. 
2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.350654+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.350654+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.354612+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.354612+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.826002+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.826002+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.826959+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:03.826959+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.192038+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.192038+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.193009+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.193009+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.362302+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.362302+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.363293+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.363293+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.625861+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.625861+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.626347+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.626347+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.626407+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.626407+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.626455+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.626455+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.677108+0000 mgr.y (mgr.14505) 123 : cluster [DBG] pgmap v80: 712 pgs: 1 active+clean+snaptrim, 36 creating+peering, 160 unknown, 515 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 14 MiB/s rd, 34 MiB/s wr, 487 op/s 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.677108+0000 mgr.y (mgr.14505) 123 : cluster [DBG] pgmap v80: 712 pgs: 1 active+clean+snaptrim, 36 creating+peering, 160 unknown, 515 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 14 MiB/s rd, 34 MiB/s wr, 487 op/s 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.690528+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 
192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.690528+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.698279+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2"}]: dispatch 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.698279+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2"}]: dispatch 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.700937+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.700937+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.716560+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: cluster 2026-03-09T17:29:04.716560+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.718911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.718911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.719381+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.719381+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.727694+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.727694+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.727801+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.727801+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.729426+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.729426+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.729530+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.729530+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.729576+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.729576+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.742759+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.742759+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.751822+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.100:0/438360680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.751822+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.100:0/438360680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.752628+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:04.752628+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:05.179943+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:05 vm00 bash[20770]: audit 2026-03-09T17:29:05.179943+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.350654+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.350654+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.354612+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.354612+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.826002+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.826002+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.826959+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:03.826959+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.192038+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.192038+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.193009+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.193009+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.362302+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.362302+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.363293+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.363293+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.625861+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.625861+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.626347+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.626347+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.626407+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.626407+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.626455+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.626455+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.677108+0000 mgr.y (mgr.14505) 123 : cluster [DBG] pgmap v80: 712 pgs: 1 active+clean+snaptrim, 36 creating+peering, 160 unknown, 515 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 14 MiB/s rd, 34 MiB/s wr, 487 op/s 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.677108+0000 mgr.y (mgr.14505) 123 : cluster [DBG] pgmap v80: 712 pgs: 1 active+clean+snaptrim, 36 creating+peering, 160 unknown, 515 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 14 MiB/s rd, 34 MiB/s wr, 487 op/s 2026-03-09T17:29:06.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.690528+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 
192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.690528+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.698279+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.698279+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.700937+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.700937+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.716560+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: cluster 2026-03-09T17:29:04.716560+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.718911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.718911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.719381+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.719381+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.727694+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.727694+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.727801+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.727801+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.729426+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.729426+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.729530+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.729530+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.729576+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.729576+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.742759+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.742759+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.751822+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.100:0/438360680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.751822+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.100:0/438360680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.752628+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:04.752628+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:05.179943+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:06.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:05 vm00 bash[28333]: audit 2026-03-09T17:29:05.179943+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.350654+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.350654+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.354612+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.354612+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.826002+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.826002+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.826959+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:03.826959+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.192038+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:06.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.192038+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.193009+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.193009+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.362302+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.362302+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.363293+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.363293+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.625861+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.625861+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "overlaypool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.626347+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.626347+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.626407+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.626407+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.626455+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.626455+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.677108+0000 mgr.y (mgr.14505) 123 : cluster [DBG] pgmap v80: 712 pgs: 1 active+clean+snaptrim, 36 creating+peering, 160 unknown, 515 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 14 MiB/s rd, 34 MiB/s wr, 487 op/s 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.677108+0000 mgr.y (mgr.14505) 123 : cluster [DBG] pgmap v80: 712 pgs: 1 active+clean+snaptrim, 36 creating+peering, 160 unknown, 515 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 14 MiB/s rd, 34 MiB/s wr, 487 op/s 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.690528+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 
192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.690528+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.100:0/1551263207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.698279+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.698279+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.700937+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.700937+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.716560+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: cluster 2026-03-09T17:29:04.716560+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.718911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.718911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.719381+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.719381+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.727694+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.727694+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.727801+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.727801+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.729426+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.729426+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.729530+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.729530+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.729576+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.729576+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.742759+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.742759+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.751822+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.100:0/438360680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.751822+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.100:0/438360680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.752628+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:04.752628+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:05.179943+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:05 vm02 bash[23351]: audit 2026-03-09T17:29:05.179943+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:06.440 INFO:tasks.workunit.client.0.vm00.stdout: pool: Running main() from gmock_main.cc 2026-03-09T17:29:06.440 INFO:tasks.workunit.client.0.vm00.stdout: pool: [==========] Running 6 tests from 1 test suite. 2026-03-09T17:29:06.440 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] Global test environment set-up. 
2026-03-09T17:29:06.440 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] 6 tests from NeoRadosPools 2026-03-09T17:29:06.440 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolList 2026-03-09T17:29:06.440 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolList (1678 ms) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolLookup 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolLookup (2258 ms) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolLookupOtherInstance 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolLookupOtherInstance (2179 ms) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolDelete 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolDelete (3431 ms) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateDelete 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolCreateDelete (2342 ms) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateWithCrushRule 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolCreateWithCrushRule (1714 ms) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] 6 tests from NeoRadosPools (13602 ms total) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] Global test environment tear-down 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [==========] 6 tests from 1 test suite ran. (13602 ms total) 2026-03-09T17:29:06.441 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ PASSED ] 6 tests. 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: Running main() from gmock_main.cc 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [==========] Running 16 tests from 2 test suites. 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] Global test environment set-up. 
2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: seed 60068 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusivePP 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusivePP (269 ms) 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedPP 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedPP (21 ms) 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusiveDurPP 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusiveDurPP (1243 ms) 2026-03-09T17:29:06.457 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedDurPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedDurPP (1032 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockMayRenewPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockMayRenewPP (21 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.UnlockPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.UnlockPP (11 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.ListLockersPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.ListLockersPP (136 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.BreakLockPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.BreakLockPP (4 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP (2737 ms total) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusivePP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusivePP (1516 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedPP (5 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusiveDurPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusiveDurPP (1241 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedDurPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedDurPP (1029 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] 
LibRadosLockECPP.LockMayRenewPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockMayRenewPP (5 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.UnlockPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.UnlockPP (5 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.ListLockersPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.ListLockersPP (8 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.BreakLockPP 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.BreakLockPP (3 ms) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP (3812 ms total) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] Global test environment tear-down 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [==========] 16 tests from 2 test suites ran. (14220 ms total) 2026-03-09T17:29:06.458 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ PASSED ] 16 tests. 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: Running main() from gmock_main.cc 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [==========] Running 16 tests from 2 test suites. 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] Global test environment set-up. 
2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLock 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusive 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockExclusive (306 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockShared 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockShared (21 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusiveDur 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockExclusiveDur (1268 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockSharedDur 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockSharedDur (1025 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockMayRenew 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockMayRenew (6 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.Unlock 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.Unlock (5 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.ListLockers 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.ListLockers (114 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.BreakLock 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.BreakLock (27 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLock (2772 ms total) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLockEC 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusive 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusive (1516 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockShared 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockShared (18 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusiveDur 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusiveDur (1252 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockSharedDur 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockSharedDur (1065 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockMayRenew 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockMayRenew (6 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.Unlock 2026-03-09T17:29:06.459 
INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.Unlock (5 ms) 2026-03-09T17:29:06.459 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.ListLockers 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.ListLockers (6 ms) 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.BreakLock 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.BreakLock (3 ms) 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLockEC (3871 ms total) 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] Global test environment tear-down 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [==========] 16 tests from 2 test suites ran. (14248 ms total) 2026-03-09T17:29:06.460 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ PASSED ] 16 tests. 2026-03-09T17:29:06.710 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:29:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:29:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.231011+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.231011+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.231997+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.231997+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.399745+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.399745+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.401029+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.401029+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.704421+0000 mon.a (mon.0) 1038 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.704421+0000 mon.a (mon.0) 1038 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.705514+0000 mon.a (mon.0) 1039 : cluster [WRN] Health check failed: 1 cache 
pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.705514+0000 mon.a (mon.0) 1039 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.718879+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.718879+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.718936+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.718936+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719035+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719035+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719051+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719051+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719068+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719068+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719098+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719098+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719109+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.719109+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.721692+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:05.721692+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.730263+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.730263+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.730438+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.730438+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.739138+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/2936898738' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.739138+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/2936898738' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.748121+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/289997494' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.748121+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/289997494' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.755199+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.755199+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.755679+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.755679+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.755846+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.755846+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.756059+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.756059+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.780685+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.780685+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.830940+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.830940+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.842362+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.842362+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 
192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.843417+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.843417+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.859288+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.859288+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.861905+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.861905+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.862766+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.862766+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.863666+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.863666+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.864106+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.864106+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.865662+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.231011+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.231011+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.231997+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.231997+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.399745+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.399745+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.401029+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.401029+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.704421+0000 mon.a (mon.0) 1038 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.704421+0000 mon.a (mon.0) 1038 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.705514+0000 mon.a (mon.0) 1039 : cluster [WRN] Health check failed: 1 
cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.705514+0000 mon.a (mon.0) 1039 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:07.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.718879+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.718879+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.718936+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.718936+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719035+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719035+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719051+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719051+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719068+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719068+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719098+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719098+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719109+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.719109+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.721692+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:05.721692+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.730263+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.730263+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 
192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.730438+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.730438+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.739138+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/2936898738' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.739138+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/2936898738' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:05.865662+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.180792+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.180792+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:06.282353+0000 mon.a (mon.0) 1058 : cluster [WRN] pool 'PoolQuotaPP_vm00-59916-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:06.282353+0000 mon.a (mon.0) 1058 : cluster [WRN] pool 'PoolQuotaPP_vm00-59916-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:06.283845+0000 mon.a (mon.0) 1059 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:06.283845+0000 mon.a (mon.0) 1059 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400895+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400895+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400925+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400925+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400948+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400948+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400969+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400969+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400995+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.400995+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.401037+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.401037+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.401112+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.401112+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:06.419048+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: cluster 2026-03-09T17:29:06.419048+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.434151+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.434151+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 
192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.437203+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.437203+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.443556+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/2584030285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.443556+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/2584030285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.449105+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.449105+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.449440+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.449440+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.449648+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.449648+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.474982+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.474982+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.485345+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:06 vm00 bash[28333]: audit 2026-03-09T17:29:06.485345+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.748121+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/289997494' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.748121+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/289997494' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.755199+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.755199+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 
192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.755679+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.755679+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.755846+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.755846+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.756059+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.756059+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.780685+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.780685+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.830940+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.830940+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 
192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.842362+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.842362+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.843417+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.843417+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.859288+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.859288+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.861905+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.861905+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.862766+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.862766+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.863666+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.863666+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.864106+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.864106+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.865662+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:05.865662+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.180792+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.180792+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:06.282353+0000 mon.a (mon.0) 1058 : cluster [WRN] pool 'PoolQuotaPP_vm00-59916-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:06.282353+0000 mon.a (mon.0) 1058 : cluster [WRN] pool 'PoolQuotaPP_vm00-59916-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:06.283845+0000 mon.a (mon.0) 1059 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:06.283845+0000 mon.a (mon.0) 1059 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400895+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400895+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400925+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400925+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400948+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400948+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400969+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400969+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400995+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.400995+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.401037+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.401037+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.401112+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.401112+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:06.419048+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: cluster 2026-03-09T17:29:06.419048+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.434151+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.434151+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 
192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.437203+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.437203+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.443556+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/2584030285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.443556+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/2584030285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.449105+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.449105+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.449440+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.449440+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.449648+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.449648+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.474982+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.474982+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.485345+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.042 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:06 vm00 bash[20770]: audit 2026-03-09T17:29:06.485345+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.231011+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.231011+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.231997+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.231997+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.399745+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.399745+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.401029+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.401029+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.704421+0000 mon.a (mon.0) 1038 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.704421+0000 mon.a (mon.0) 1038 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.705514+0000 mon.a (mon.0) 1039 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.705514+0000 mon.a (mon.0) 1039 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:07.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.718879+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.718879+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59929-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.718936+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.718936+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60328-6", "mode": "writeback"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719035+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719035+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60745-2"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719051+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719051+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-60095-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719068+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719068+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719098+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719098+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719109+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.719109+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59908-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.721692+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:05.721692+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.730263+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.730263+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/289600082' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.730438+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.730438+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/1523023295' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.739138+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 
192.168.123.100:0/2936898738' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.739138+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/2936898738' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.748121+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/289997494' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.748121+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/289997494' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.755199+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.755199+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.755679+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.755679+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.755846+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.755846+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.756059+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.756059+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.780685+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.780685+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.830940+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.830940+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.842362+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.842362+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.843417+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.843417+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.859288+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 
192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.859288+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.861905+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.861905+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.862766+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.862766+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.863666+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.863666+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.864106+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.864106+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.865662+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:05.865662+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.180792+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.180792+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:06.282353+0000 mon.a (mon.0) 1058 : cluster [WRN] pool 'PoolQuotaPP_vm00-59916-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:06.282353+0000 mon.a (mon.0) 1058 : cluster [WRN] pool 'PoolQuotaPP_vm00-59916-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:06.283845+0000 mon.a (mon.0) 1059 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:06.283845+0000 mon.a (mon.0) 1059 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400895+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400895+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.100:0/2242459876' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60801-2"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400925+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400925+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-60068-10"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400948+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400948+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-60039-10"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400969+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400969+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-60171-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400995+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.400995+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.401037+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.401037+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.401112+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.401112+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:06.419048+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: cluster 2026-03-09T17:29:06.419048+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.434151+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.434151+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.437203+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.437203+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.443556+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/2584030285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.443556+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/2584030285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.449105+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 
192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.449105+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.449440+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.449440+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.449648+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.449648+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.474982+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.474982+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.485345+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:07.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:06 vm02 bash[23351]: audit 2026-03-09T17:29:06.485345+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: cluster 2026-03-09T17:29:06.677603+0000 mgr.y (mgr.14505) 124 : cluster [DBG] pgmap v83: 656 pgs: 1 active+clean+snaptrim, 36 creating+peering, 200 unknown, 419 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 30 MiB/s wr, 427 op/s 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: cluster 2026-03-09T17:29:06.677603+0000 mgr.y (mgr.14505) 124 : cluster [DBG] pgmap v83: 656 pgs: 1 active+clean+snaptrim, 36 creating+peering, 200 unknown, 419 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 30 MiB/s wr, 427 op/s 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.181571+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.181571+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.411913+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.411913+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.411979+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.411979+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.412070+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.412070+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: cluster 2026-03-09T17:29:07.421853+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: cluster 2026-03-09T17:29:07.421853+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.422727+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.422727+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.428211+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.428211+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.428676+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.428676+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.429026+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.429026+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.433590+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.433590+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.433686+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.433686+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.433915+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.433915+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.434220+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.434220+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.436371+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:07 vm00 bash[28333]: audit 2026-03-09T17:29:07.436371+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: cluster 2026-03-09T17:29:06.677603+0000 mgr.y (mgr.14505) 124 : cluster [DBG] pgmap v83: 656 pgs: 1 active+clean+snaptrim, 36 creating+peering, 200 unknown, 419 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 30 MiB/s wr, 427 op/s 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: cluster 2026-03-09T17:29:06.677603+0000 mgr.y (mgr.14505) 124 : cluster [DBG] pgmap v83: 656 pgs: 1 active+clean+snaptrim, 36 creating+peering, 200 unknown, 419 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 30 MiB/s wr, 427 op/s 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.181571+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.181571+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.411913+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.411913+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.411979+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.411979+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.412070+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.412070+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: cluster 2026-03-09T17:29:07.421853+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: cluster 2026-03-09T17:29:07.421853+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.422727+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.422727+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.428211+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.428211+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.428676+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.428676+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.429026+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.429026+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.433590+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.433590+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.433686+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.433686+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.433915+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.433915+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.434220+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.434220+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.436371+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:07 vm00 bash[20770]: audit 2026-03-09T17:29:07.436371+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: cluster 2026-03-09T17:29:06.677603+0000 mgr.y (mgr.14505) 124 : cluster [DBG] pgmap v83: 656 pgs: 1 active+clean+snaptrim, 36 creating+peering, 200 unknown, 419 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 30 MiB/s wr, 427 op/s 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: cluster 2026-03-09T17:29:06.677603+0000 mgr.y (mgr.14505) 124 : cluster [DBG] pgmap v83: 656 pgs: 1 active+clean+snaptrim, 36 creating+peering, 200 unknown, 419 active+clean; 120 MiB data, 589 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 30 MiB/s wr, 427 op/s 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.181571+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.181571+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.411913+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.411913+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.411979+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.411979+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.412070+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.412070+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60801-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: cluster 2026-03-09T17:29:07.421853+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: cluster 2026-03-09T17:29:07.421853+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.422727+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.422727+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.428211+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.428211+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.428676+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.428676+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.429026+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.429026+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.433590+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.433590+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.433686+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.433686+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.433915+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.433915+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.434220+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.434220+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.100:0/1928079168' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.436371+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:07 vm02 bash[23351]: audit 2026-03-09T17:29:07.436371+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: Running main() from gmock_main.cc 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [==========] Running 16 tests from 2 test suites. 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] Global test environment set-up. 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotify 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotify (1902 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotifyTimeout 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotifyTimeout (13 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP (1915 ms total) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 (258 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 (4370 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 (6 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 (5 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342100816144 notify_id 339302416385 notifier_gid 15126 
2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 (6 ms) 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 2026-03-09T17:29:08.980 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342100816144 notify_id 339302416386 notifier_gid 15126 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 (6 ms) 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342100802576 notify_id 339302416387 notifier_gid 15126 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 (6 ms) 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342100816144 notify_id 339302416388 notifier_gid 15126 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 (5 ms) 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342100816144 notify_id 339302416389 notifier_gid 15126 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 (6 ms) 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342100799312 notify_id 339302416387 notifier_gid 15126 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 (7 ms) 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: trying... 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342099870224 notify_id 339302416390 notifier_gid 15126 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: timed out 2026-03-09T17:29:08.981 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.182416+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.182416+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416417+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416417+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416463+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416463+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416486+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416486+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416510+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416510+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416533+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.416533+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.439326+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.439326+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: cluster 2026-03-09T17:29:08.453413+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: cluster 2026-03-09T17:29:08.453413+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.461280+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.461280+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.461392+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:08 vm00 bash[20770]: audit 2026-03-09T17:29:08.461392+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.182416+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.182416+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416417+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416417+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416463+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416463+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416486+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416486+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416510+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416510+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416533+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.416533+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.439326+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.439326+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: cluster 2026-03-09T17:29:08.453413+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: cluster 2026-03-09T17:29:08.453413+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.461280+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.461280+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.461392+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:08 vm00 bash[28333]: audit 2026-03-09T17:29:08.461392+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.182416+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.182416+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416417+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416417+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60286-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416463+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416463+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/3264347344' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59908-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416486+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416486+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416510+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416510+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416533+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.416533+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-60095-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.439326+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.439326+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/2533441032' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: cluster 2026-03-09T17:29:08.453413+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: cluster 2026-03-09T17:29:08.453413+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.461280+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.461280+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.461392+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:08 vm02 bash[23351]: audit 2026-03-09T17:29:08.461392+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]: dispatch 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP2 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP2 (5 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.OverlappingWriteRoundTripPP 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.OverlappingWriteRoundTripPP (9 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP (13 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP2 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP2 (6 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.AppendRoundTripPP 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.AppendRoundTripPP (8 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.TruncTestPP 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.TruncTestPP (8 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RemoveTestPP 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RemoveTestPP (6 ms) 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrsRoundTripPP 2026-03-09T17:29:09.524 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrsRoundTripPP (6 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RmXattrPP 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RmXattrPP (27 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CrcZeroWrite 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CrcZeroWrite (6321 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrListPP 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrListPP (831 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtPP 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtPP (4 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtDNEPP 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtDNEPP (4 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtMismatchPP 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtMismatchPP (3 ms) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP (8651 ms
total) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] Global test environment tear-down 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [==========] 39 tests from 2 test suites ran. (17350 ms total) 2026-03-09T17:29:09.525 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ PASSED ] 39 tests. 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: cluster 2026-03-09T17:29:08.678370+0000 mgr.y (mgr.14505) 125 : cluster [DBG] pgmap v86: 560 pgs: 64 creating+peering, 40 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 3 op/s 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: cluster 2026-03-09T17:29:08.678370+0000 mgr.y (mgr.14505) 125 : cluster [DBG] pgmap v86: 560 pgs: 64 creating+peering, 40 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 3 op/s 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.183211+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.183211+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.422406+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.422406+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.422459+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.422459+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.422481+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.422481+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: cluster 2026-03-09T17:29:09.431578+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: cluster 2026-03-09T17:29:09.431578+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.500501+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.500501+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.500747+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/1973595005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.500747+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/1973595005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.500987+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/504320602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.500987+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/504320602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.518097+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.518097+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.518265+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.518265+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.518323+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:09 vm00 bash[20770]: audit 2026-03-09T17:29:09.518323+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: cluster 2026-03-09T17:29:08.678370+0000 mgr.y (mgr.14505) 125 : cluster [DBG] pgmap v86: 560 pgs: 64 creating+peering, 40 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 3 op/s 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: cluster 2026-03-09T17:29:08.678370+0000 mgr.y (mgr.14505) 125 : cluster [DBG] pgmap v86: 560 pgs: 64 creating+peering, 40 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 3 op/s 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.183211+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.183211+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.422406+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.422406+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.422459+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.422459+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.422481+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.422481+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: cluster 2026-03-09T17:29:09.431578+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: cluster 2026-03-09T17:29:09.431578+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.500501+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.500501+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.500747+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 
192.168.123.100:0/1973595005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.500747+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/1973595005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.500987+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/504320602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.500987+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/504320602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.518097+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.518097+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.518265+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.518265+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.518323+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:09 vm00 bash[28333]: audit 2026-03-09T17:29:09.518323+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: cluster 2026-03-09T17:29:08.678370+0000 mgr.y (mgr.14505) 125 : cluster [DBG] pgmap v86: 560 pgs: 64 creating+peering, 40 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 3 op/s 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: cluster 2026-03-09T17:29:08.678370+0000 mgr.y (mgr.14505) 125 : cluster [DBG] pgmap v86: 560 pgs: 64 creating+peering, 40 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 3 op/s 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.183211+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.183211+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.422406+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.422406+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60801-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.422459+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.422459+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/1253365352' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.422481+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.422481+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59929-23"}]': finished 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: cluster 2026-03-09T17:29:09.431578+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: cluster 2026-03-09T17:29:09.431578+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.500501+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.500501+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.500747+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/1973595005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.500747+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/1973595005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.500987+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/504320602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.500987+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/504320602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.518097+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.518097+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.518265+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.518265+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.518323+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:09 vm02 bash[23351]: audit 2026-03-09T17:29:09.518323+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.186845+0000 mon.c (mon.2) 161 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.186845+0000 mon.c (mon.2) 161 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.426568+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.426568+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.426615+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.426615+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.426665+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.426665+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: cluster 2026-03-09T17:29:10.449337+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: cluster 2026-03-09T17:29:10.449337+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.472367+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/1518255068' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.472367+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/1518255068' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.472469+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.472469+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.479497+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.479497+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.504027+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.504027+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.867555+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.867555+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.868637+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:10 vm00 bash[28333]: audit 2026-03-09T17:29:10.868637+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.186845+0000 mon.c (mon.2) 161 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.186845+0000 mon.c (mon.2) 161 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.426568+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.426568+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.426615+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.426615+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.426665+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.426665+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: cluster 2026-03-09T17:29:10.449337+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: cluster 2026-03-09T17:29:10.449337+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.472367+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/1518255068' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.472367+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/1518255068' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.472469+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.472469+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 
192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.479497+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.479497+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.504027+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.504027+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.867555+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.294 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.867555+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.294 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.868637+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.294 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:10 vm00 bash[20770]: audit 2026-03-09T17:29:10.868637+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.186845+0000 mon.c (mon.2) 161 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.186845+0000 mon.c (mon.2) 161 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.426568+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.426568+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.426615+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.426615+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-60199-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.426665+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.426665+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: cluster 2026-03-09T17:29:10.449337+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: cluster 2026-03-09T17:29:10.449337+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.472367+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/1518255068' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.472367+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 
192.168.123.100:0/1518255068' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.472469+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.472469+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.479497+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.479497+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.504027+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.504027+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.867555+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.867555+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.868637+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:10 vm02 bash[23351]: audit 2026-03-09T17:29:10.868637+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:11.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:29:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: cluster 2026-03-09T17:29:10.679287+0000 mgr.y (mgr.14505) 126 : cluster [DBG] pgmap v89: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 8 op/s 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: cluster 2026-03-09T17:29:10.679287+0000 mgr.y (mgr.14505) 126 : cluster [DBG] pgmap v89: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 8 op/s 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.187765+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.187765+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: cluster 2026-03-09T17:29:11.302722+0000 mon.a (mon.0) 1103 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: cluster 2026-03-09T17:29:11.302722+0000 mon.a (mon.0) 1103 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.438186+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.438186+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.438229+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.438229+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.438251+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.438251+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: cluster 2026-03-09T17:29:11.440140+0000 mon.a (mon.0) 1107 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: cluster 2026-03-09T17:29:11.440140+0000 mon.a (mon.0) 1107 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.477749+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.477749+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.478758+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.478758+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.480110+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.480110+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.483568+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:11 vm00 bash[28333]: audit 2026-03-09T17:29:11.483568+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: cluster 2026-03-09T17:29:10.679287+0000 mgr.y (mgr.14505) 126 : cluster [DBG] pgmap v89: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 8 op/s 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: cluster 2026-03-09T17:29:10.679287+0000 mgr.y (mgr.14505) 126 : cluster [DBG] pgmap v89: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 8 op/s 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.187765+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.187765+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: cluster 2026-03-09T17:29:11.302722+0000 mon.a (mon.0) 1103 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: cluster 2026-03-09T17:29:11.302722+0000 mon.a (mon.0) 1103 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.438186+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.438186+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.438229+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.438229+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.438251+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.438251+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: cluster 2026-03-09T17:29:11.440140+0000 mon.a (mon.0) 1107 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: cluster 2026-03-09T17:29:11.440140+0000 mon.a (mon.0) 1107 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.477749+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.477749+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.478758+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.478758+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.480110+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.480110+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.483568+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.297 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:11 vm00 bash[20770]: audit 2026-03-09T17:29:11.483568+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: cluster 2026-03-09T17:29:10.679287+0000 mgr.y (mgr.14505) 126 : cluster [DBG] pgmap v89: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 8 op/s 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: cluster 2026-03-09T17:29:10.679287+0000 mgr.y (mgr.14505) 126 : cluster [DBG] pgmap v89: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.3 MiB/s wr, 8 op/s 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.187765+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.187765+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: cluster 2026-03-09T17:29:11.302722+0000 mon.a (mon.0) 1103 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: cluster 2026-03-09T17:29:11.302722+0000 mon.a (mon.0) 1103 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.438186+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.438186+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.438229+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.438229+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.438251+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.438251+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: cluster 2026-03-09T17:29:11.440140+0000 mon.a (mon.0) 1107 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: cluster 2026-03-09T17:29:11.440140+0000 mon.a (mon.0) 1107 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.477749+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.477749+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/1476709314' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.478758+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.478758+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.480110+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.480110+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch
2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.483568+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch
2026-03-09T17:29:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:11 vm02 bash[23351]: audit 2026-03-09T17:29:11.483568+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]: dispatch
2026-03-09T17:29:12.551 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: Running main() from gmock_main.cc
2026-03-09T17:29:12.551 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites.
2026-03-09T17:29:12.551 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] Global test environment set-up.
2026-03-09T17:29:12.551 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify_test_cb
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (561 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94029992360528 err -107
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 300 for disconnect notification ...
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (87 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94029992360528 err -107
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 300 for disconnect notification ...
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (39 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 292057776128 cookie 94029992372480
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2 (10 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchNotify2
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 292057776128 cookie 94029992372480
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchNotify2 (6 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioNotify
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 292057776129 cookie 94029992392688
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioNotify (35 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Multi
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 292057776128 cookie 94029992414928
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 292057776128 cookie 94029992429584
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Multi (12 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Timeout
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 292057776129 cookie 94029992429584
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 296352743426 cookie 94029992429584
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Timeout (3020 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch3Timeout
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 1024 for osd to time us out ...
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94029992429584 err -107
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 24602 notify_id 326417514498 cookie 94029992429584
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch3Timeout (5015 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete2
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 30 for disconnect notification ...
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94029992429584 err -107
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete2 (1085 ms)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify (9870 ms total)
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify:
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC
2026-03-09T17:29:12.552 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotifyEC.WatchNotify
2026-03-09T17:29:12.553 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify_test_cb
2026-03-09T17:29:12.553 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotifyEC.WatchNotify (1082 ms)
2026-03-09T17:29:12.553 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC (1082 ms total)
2026-03-09T17:29:12.553 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify:
2026-03-09T17:29:12.553 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] Global test environment tear-down
2026-03-09T17:29:12.553 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [==========] 11 tests from 2 test suites ran. (20086 ms total)
2026-03-09T17:29:12.666 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ PASSED ] 11 tests.
2026-03-09T17:29:14.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:11.539627+0000 mgr.y (mgr.14505) 127 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:11.539627+0000 mgr.y (mgr.14505) 127 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.066351+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.066351+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.14505 ' entity='mgr.y'
2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.070961+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.070961+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.188511+0000 mon.c (mon.2) 164 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.188511+0000 mon.c (mon.2) 164 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: cluster 2026-03-09T17:29:12.439534+0000 mon.a (mon.0) 1111 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: cluster 2026-03-09T17:29:12.439534+0000 mon.a (mon.0) 1111 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.444683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.444683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.444757+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.444757+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: cluster 2026-03-09T17:29:12.484484+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: cluster 2026-03-09T17:29:12.484484+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.495080+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/882903731' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.495080+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/882903731' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.503070+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.503070+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.685456+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.685456+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.686468+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:13 vm00 bash[20770]: audit 2026-03-09T17:29:12.686468+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:11.539627+0000 mgr.y (mgr.14505) 127 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:11.539627+0000 mgr.y (mgr.14505) 127 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.066351+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:14.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.066351+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.070961+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.070961+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.188511+0000 mon.c (mon.2) 164 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.188511+0000 mon.c (mon.2) 164 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: cluster 2026-03-09T17:29:12.439534+0000 mon.a (mon.0) 1111 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: cluster 2026-03-09T17:29:12.439534+0000 mon.a (mon.0) 1111 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.444683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.444683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.444757+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.444757+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: cluster 2026-03-09T17:29:12.484484+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: cluster 2026-03-09T17:29:12.484484+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.495080+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/882903731' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.495080+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/882903731' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.503070+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.503070+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.685456+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.685456+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.686468+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:13 vm00 bash[28333]: audit 2026-03-09T17:29:12.686468+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:11.539627+0000 mgr.y (mgr.14505) 127 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:11.539627+0000 mgr.y (mgr.14505) 127 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.066351+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.066351+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.070961+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.070961+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.188511+0000 mon.c (mon.2) 164 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.188511+0000 mon.c (mon.2) 164 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: cluster 2026-03-09T17:29:12.439534+0000 mon.a (mon.0) 1111 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: cluster 2026-03-09T17:29:12.439534+0000 mon.a (mon.0) 1111 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.444683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.444683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60286-12"}]': finished 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.444757+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.444757+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: cluster 2026-03-09T17:29:12.484484+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: cluster 2026-03-09T17:29:12.484484+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.495080+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/882903731' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.495080+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/882903731' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.503070+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.503070+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.685456+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.685456+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.686468+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:13 vm02 bash[23351]: audit 2026-03-09T17:29:12.686468+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: cluster 2026-03-09T17:29:12.679825+0000 mgr.y (mgr.14505) 128 : cluster [DBG] pgmap v92: 656 pgs: 32 creating+peering, 200 unknown, 424 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: cluster 2026-03-09T17:29:12.679825+0000 mgr.y (mgr.14505) 128 : cluster [DBG] pgmap v92: 656 pgs: 32 creating+peering, 200 unknown, 424 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:13.189285+0000 mon.c (mon.2) 166 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:13.189285+0000 mon.c (mon.2) 166 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:13.982013+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:13.982013+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:13.982339+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:13.982339+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.000080+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.000080+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.053442+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.100:0/766966934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.053442+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.100:0/766966934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: cluster 2026-03-09T17:29:14.080899+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: cluster 2026-03-09T17:29:14.080899+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.089645+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.089645+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.090542+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.090542+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.190908+0000 mon.c (mon.2) 168 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:14 vm02 bash[23351]: audit 2026-03-09T17:29:14.190908+0000 mon.c (mon.2) 168 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: cluster 2026-03-09T17:29:12.679825+0000 mgr.y (mgr.14505) 128 : cluster [DBG] pgmap v92: 656 pgs: 32 creating+peering, 200 unknown, 424 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: cluster 2026-03-09T17:29:12.679825+0000 mgr.y (mgr.14505) 128 : cluster [DBG] pgmap v92: 656 pgs: 32 creating+peering, 200 unknown, 424 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:13.189285+0000 mon.c (mon.2) 166 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:13.189285+0000 mon.c (mon.2) 166 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:13.982013+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:13.982013+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:13.982339+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:13.982339+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.000080+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.000080+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.053442+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.100:0/766966934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.053442+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.100:0/766966934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: cluster 2026-03-09T17:29:14.080899+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: cluster 2026-03-09T17:29:14.080899+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.089645+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.089645+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.090542+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.090542+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.190908+0000 mon.c (mon.2) 168 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:14 vm00 bash[28333]: audit 2026-03-09T17:29:14.190908+0000 mon.c (mon.2) 168 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: cluster 2026-03-09T17:29:12.679825+0000 mgr.y (mgr.14505) 128 : cluster [DBG] pgmap v92: 656 pgs: 32 creating+peering, 200 unknown, 424 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: cluster 2026-03-09T17:29:12.679825+0000 mgr.y (mgr.14505) 128 : cluster [DBG] pgmap v92: 656 pgs: 32 creating+peering, 200 unknown, 424 active+clean; 144 MiB data, 798 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:13.189285+0000 mon.c (mon.2) 166 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:13.189285+0000 mon.c (mon.2) 166 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:13.982013+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:13.982013+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60449-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:15.292 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:13.982339+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:13.982339+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.000080+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.000080+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.053442+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.100:0/766966934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.053442+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.100:0/766966934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: cluster 2026-03-09T17:29:14.080899+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: cluster 2026-03-09T17:29:14.080899+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.089645+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.089645+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.090542+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.090542+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.190908+0000 mon.c (mon.2) 168 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:15.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:14 vm00 bash[20770]: audit 2026-03-09T17:29:14.190908+0000 mon.c (mon.2) 168 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:14.570085+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:14.570085+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:14.572877+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:14.572877+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:14.680303+0000 mgr.y (mgr.14505) 129 : cluster [DBG] pgmap v94: 656 pgs: 32 unknown, 128 creating+peering, 1 active+clean+snaptrim, 495 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:14.680303+0000 mgr.y (mgr.14505) 129 : cluster [DBG] pgmap v94: 656 pgs: 32 unknown, 128 creating+peering, 1 active+clean+snaptrim, 495 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:14.992932+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:14.992932+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:14.992975+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:14.992975+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:15.037495+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: cluster 2026-03-09T17:29:15.037495+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.047998+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 
192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.047998+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.053835+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/446294722' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.053835+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/446294722' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.054011+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/3697849533' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.054011+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/3697849533' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.081956+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.081956+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.082528+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.082528+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.082627+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.082627+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.194516+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:15 vm02 bash[23351]: audit 2026-03-09T17:29:15.194516+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:14.570085+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:14.570085+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:14.572877+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:14.572877+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:14.680303+0000 mgr.y (mgr.14505) 129 : cluster [DBG] pgmap v94: 656 pgs: 32 unknown, 128 creating+peering, 1 active+clean+snaptrim, 495 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:14.680303+0000 mgr.y (mgr.14505) 129 : cluster [DBG] pgmap v94: 656 pgs: 32 unknown, 128 creating+peering, 1 active+clean+snaptrim, 495 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:14.992932+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:14.992932+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:14.992975+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:14.992975+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:15.037495+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: cluster 2026-03-09T17:29:15.037495+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.047998+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.047998+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.053835+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/446294722' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.053835+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/446294722' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.054011+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/3697849533' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.054011+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 
192.168.123.100:0/3697849533' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.081956+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.081956+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.082528+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.082528+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.082627+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.082627+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.194516+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:15 vm00 bash[20770]: audit 2026-03-09T17:29:15.194516+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:14.570085+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:14.570085+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:14.572877+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:14.572877+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:14.680303+0000 mgr.y (mgr.14505) 129 : cluster [DBG] pgmap v94: 656 pgs: 32 unknown, 128 creating+peering, 1 active+clean+snaptrim, 495 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:14.680303+0000 mgr.y (mgr.14505) 129 : cluster [DBG] pgmap v94: 656 pgs: 32 unknown, 128 creating+peering, 1 active+clean+snaptrim, 495 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:14.992932+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:14.992932+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-7"}]': finished 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:14.992975+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:14.992975+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59908-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:15.037495+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: cluster 2026-03-09T17:29:15.037495+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.047998+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 
192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.047998+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.053835+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/446294722' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.053835+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/446294722' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.054011+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/3697849533' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.054011+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/3697849533' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.081956+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.081956+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.082528+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.082528+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.082627+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.082627+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.194516+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:15 vm00 bash[28333]: audit 2026-03-09T17:29:15.194516+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:16.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:29:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:29:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: Running main() from gmock_main.cc 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [==========] Running 3 tests from 1 test suite. 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] Global test environment set-up. 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] 3 tests from NeoradosECList 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ RUN ] NeoradosECList.ListObjects 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ OK ] NeoradosECList.ListObjects (7172 ms) 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsNS 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsNS (6430 ms) 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsMany 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsMany (10658 ms) 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] 3 tests from NeoradosECList (24260 ms total) 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] Global test environment tear-down 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [==========] 3 tests from 1 test suite ran. (24260 ms total) 2026-03-09T17:29:17.070 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ PASSED ] 3 tests. 
2026-03-09T17:29:17.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:15.612401+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:15.612401+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:15.658094+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:15.658094+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:15.997627+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:15.997627+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:15.997720+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:15.997720+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:15.997748+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:15.997748+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:16.003657+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:16.003657+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:16.022529+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 
192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:16.022529+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:16.051148+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:16.051148+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:16.202256+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: audit 2026-03-09T17:29:16.202256+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:16.303378+0000 mon.a (mon.0) 1133 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:16 vm02 bash[23351]: cluster 2026-03-09T17:29:16.303378+0000 mon.a (mon.0) 1133 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:15.612401+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:15.612401+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:15.658094+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:15.658094+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:15.997627+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:15.997627+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:15.997720+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:15.997720+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:15.997748+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:15.997748+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:16.003657+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:16.003657+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:16.022529+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:16.022529+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:16.051148+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:16.051148+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:16.202256+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: audit 2026-03-09T17:29:16.202256+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:16.303378+0000 mon.a (mon.0) 1133 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:16 vm00 bash[20770]: cluster 2026-03-09T17:29:16.303378+0000 mon.a (mon.0) 1133 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:15.612401+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:15.612401+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:15.658094+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:15.658094+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:15.997627+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:15.997627+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:15.997720+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:15.997720+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:15.997748+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:15.997748+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:16.003657+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:16.003657+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:16.022529+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:16.022529+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/2973045107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:16.051148+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:16.051148+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]: dispatch 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:16.202256+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: audit 2026-03-09T17:29:16.202256+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:16.303378+0000 mon.a (mon.0) 1133 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:16 vm00 bash[28333]: cluster 2026-03-09T17:29:16.303378+0000 mon.a (mon.0) 1133 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: cluster 2026-03-09T17:29:16.680770+0000 mgr.y (mgr.14505) 130 : cluster [DBG] pgmap v97: 648 pgs: 160 unknown, 32 creating+peering, 1 active+clean+snaptrim, 455 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: cluster 2026-03-09T17:29:16.680770+0000 mgr.y (mgr.14505) 130 : cluster [DBG] pgmap v97: 648 pgs: 160 unknown, 32 creating+peering, 1 active+clean+snaptrim, 455 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.020586+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.020586+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: cluster 2026-03-09T17:29:17.029238+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: cluster 2026-03-09T17:29:17.029238+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.058899+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.058899+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.068871+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.068871+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.112080+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/610719972' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.112080+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/610719972' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.117306+0000 mon.a (mon.0) 1137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.117306+0000 mon.a (mon.0) 1137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.210040+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:18 vm02 bash[23351]: audit 2026-03-09T17:29:17.210040+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: cluster 2026-03-09T17:29:16.680770+0000 mgr.y (mgr.14505) 130 : cluster [DBG] pgmap v97: 648 pgs: 160 unknown, 32 creating+peering, 1 active+clean+snaptrim, 455 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: cluster 2026-03-09T17:29:16.680770+0000 mgr.y (mgr.14505) 130 : cluster [DBG] pgmap v97: 648 pgs: 160 unknown, 32 creating+peering, 1 active+clean+snaptrim, 455 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.020586+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.020586+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: cluster 2026-03-09T17:29:17.029238+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: cluster 2026-03-09T17:29:17.029238+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.058899+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.058899+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.068871+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.068871+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.112080+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/610719972' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.112080+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/610719972' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.117306+0000 mon.a (mon.0) 1137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.117306+0000 mon.a (mon.0) 1137 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.210040+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:18 vm00 bash[20770]: audit 2026-03-09T17:29:17.210040+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: cluster 2026-03-09T17:29:16.680770+0000 mgr.y (mgr.14505) 130 : cluster [DBG] pgmap v97: 648 pgs: 160 unknown, 32 creating+peering, 1 active+clean+snaptrim, 455 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: cluster 2026-03-09T17:29:16.680770+0000 mgr.y (mgr.14505) 130 : cluster [DBG] pgmap v97: 648 pgs: 160 unknown, 32 creating+peering, 1 active+clean+snaptrim, 455 active+clean; 144 MiB data, 847 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.020586+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.020586+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60801-3"}]': finished 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: cluster 2026-03-09T17:29:17.029238+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: cluster 2026-03-09T17:29:17.029238+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.058899+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.058899+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.068871+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.068871+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.112080+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/610719972' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.112080+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/610719972' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.117306+0000 mon.a (mon.0) 1137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.117306+0000 mon.a (mon.0) 1137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.210040+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:18.541 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:18 vm00 bash[28333]: audit 2026-03-09T17:29:17.210040+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.030569+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.030569+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.030643+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.030643+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: cluster 2026-03-09T17:29:18.050819+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: cluster 2026-03-09T17:29:18.050819+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.179608+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/2689747594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.179608+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/2689747594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.181092+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.181092+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.215839+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.215839+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.312501+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.312501+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.338592+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.338592+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.341105+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:18.341105+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.050979+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.050979+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.051125+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.051125+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.051151+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.051151+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.064322+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.064322+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: cluster 2026-03-09T17:29:19.089767+0000 mon.a (mon.0) 1147 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: cluster 2026-03-09T17:29:19.089767+0000 mon.a (mon.0) 1147 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.106989+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.106989+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.107377+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:19 vm00 bash[20770]: audit 2026-03-09T17:29:19.107377+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.030569+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.030569+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.030643+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.030643+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: cluster 2026-03-09T17:29:18.050819+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: cluster 2026-03-09T17:29:18.050819+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.179608+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/2689747594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.179608+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/2689747594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.181092+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.181092+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.215839+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.215839+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.312501+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.312501+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.338592+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.338592+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.341105+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:18.341105+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.050979+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.050979+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.051125+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.051125+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.051151+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.051151+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.064322+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.064322+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: cluster 2026-03-09T17:29:19.089767+0000 mon.a (mon.0) 1147 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: cluster 2026-03-09T17:29:19.089767+0000 mon.a (mon.0) 1147 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.106989+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.106989+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.107377+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:19 vm00 bash[28333]: audit 2026-03-09T17:29:19.107377+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.030569+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.030569+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.030643+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.030643+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59908-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: cluster 2026-03-09T17:29:18.050819+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: cluster 2026-03-09T17:29:18.050819+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.179608+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/2689747594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.179608+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/2689747594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.181092+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.181092+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.215839+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.215839+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.312501+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.312501+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.338592+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.338592+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.341105+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:18.341105+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.050979+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.050979+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59916-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.051125+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.051125+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4"}]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.051151+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.051151+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.064322+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.064322+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: cluster 2026-03-09T17:29:19.089767+0000 mon.a (mon.0) 1147 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: cluster 2026-03-09T17:29:19.089767+0000 mon.a (mon.0) 1147 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.106989+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.106989+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.107377+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:19 vm02 bash[23351]: audit 2026-03-09T17:29:19.107377+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]: dispatch 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [==========] Running 4 tests from 1 test suite. 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] Global test environment set-up. 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] 4 tests from LibRadosService 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.RegisterEarly 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.RegisterEarly (5044 ms) 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.RegisterLate 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.RegisterLate (20 ms) 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.StatusFormat 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: cluster: 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: id: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: health: HEALTH_WARN 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 15 pool(s) do not have an application enabled 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: services: 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m) 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mgr: y(active, since 2m), standbys: x 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 3m) 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: laundry: 2 daemons active (1 hosts) 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 
2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: data: 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pools: 30 pools, 828 pgs 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: objects: 199 objects, 455 KiB 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: usage: 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pgs: 68.599% pgs unknown 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 15.459% pgs not active 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 568 unknown 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 132 active+clean 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 128 creating+peering 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: io: 2026-03-09T17:29:20.218 INFO:tasks.workunit.client.0.vm00.stdout: api_service: client: 1.1 KiB/s rd, 1 op/s rd, 0 op/s wr 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: cluster: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: id: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: health: HEALTH_WARN 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 18 pool(s) do not have an application enabled 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: services: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mgr: y(active, since 2m), standbys: x 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 3m) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: foo: 16 portals active (1 hosts, 3 zones) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: laundry: 1 daemon active (1 hosts) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: data: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pools: 34 pools, 980 pgs 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: objects: 292 objects, 465 KiB 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: usage: 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pgs: 27.755% pgs unknown 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 19.592% pgs not active 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 516 active+clean 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 
272 unknown 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 192 creating+peering 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: io: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: client: 2.6 KiB/s rd, 56 KiB/s wr, 39 op/s rd, 117 op/s wr 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.StatusFormat (2553 ms) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.Status 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.Status (20043 ms) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] 4 tests from LibRadosService (27661 ms total) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] Global test environment tear-down 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [==========] 4 tests from 1 test suite ran. (27661 ms total) 2026-03-09T17:29:20.219 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ PASSED ] 4 tests. 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: cluster 2026-03-09T17:29:18.681318+0000 mgr.y (mgr.14505) 131 : cluster [DBG] pgmap v100: 616 pgs: 64 creating+peering, 1 active+clean+snaptrim, 32 unknown, 519 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: cluster 2026-03-09T17:29:18.681318+0000 mgr.y (mgr.14505) 131 : cluster [DBG] pgmap v100: 616 pgs: 64 creating+peering, 1 active+clean+snaptrim, 32 unknown, 519 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:19.221878+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:19.221878+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: cluster 2026-03-09T17:29:20.051642+0000 mon.a (mon.0) 1150 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: cluster 2026-03-09T17:29:20.051642+0000 mon.a (mon.0) 1150 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.072624+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.072624+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.072678+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.072678+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: cluster 2026-03-09T17:29:20.097150+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: cluster 2026-03-09T17:29:20.097150+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.101163+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.101163+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.103931+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.103931+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.116733+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.116733+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.118952+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.118952+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.121502+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:20 vm00 bash[20770]: audit 2026-03-09T17:29:20.121502+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: cluster 2026-03-09T17:29:18.681318+0000 mgr.y (mgr.14505) 131 : cluster [DBG] pgmap v100: 616 pgs: 64 creating+peering, 1 active+clean+snaptrim, 32 unknown, 519 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: cluster 2026-03-09T17:29:18.681318+0000 mgr.y (mgr.14505) 131 : cluster [DBG] pgmap v100: 616 pgs: 64 creating+peering, 1 active+clean+snaptrim, 32 unknown, 519 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:19.221878+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:19.221878+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: cluster 2026-03-09T17:29:20.051642+0000 mon.a (mon.0) 1150 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: cluster 2026-03-09T17:29:20.051642+0000 mon.a (mon.0) 1150 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.072624+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.072624+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.072678+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.072678+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: cluster 2026-03-09T17:29:20.097150+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: cluster 2026-03-09T17:29:20.097150+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.101163+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.101163+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.103931+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.103931+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.116733+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.116733+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.118952+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.118952+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.121502+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:20 vm00 bash[28333]: audit 2026-03-09T17:29:20.121502+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: cluster 2026-03-09T17:29:18.681318+0000 mgr.y (mgr.14505) 131 : cluster [DBG] pgmap v100: 616 pgs: 64 creating+peering, 1 active+clean+snaptrim, 32 unknown, 519 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: cluster 2026-03-09T17:29:18.681318+0000 mgr.y (mgr.14505) 131 : cluster [DBG] pgmap v100: 616 pgs: 64 creating+peering, 1 active+clean+snaptrim, 32 unknown, 519 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:19.221878+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:19.221878+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: cluster 2026-03-09T17:29:20.051642+0000 mon.a (mon.0) 1150 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: cluster 2026-03-09T17:29:20.051642+0000 mon.a (mon.0) 1150 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.072624+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.072624+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.072678+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.072678+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60328-4", "tierpool": "test-rados-api-vm00-60328-6"}]': finished 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: cluster 2026-03-09T17:29:20.097150+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: cluster 2026-03-09T17:29:20.097150+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.101163+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.101163+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? 
192.168.123.100:0/1234079441' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.103931+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.103931+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.116733+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.116733+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.118952+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.118952+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.121502+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:20 vm02 bash[23351]: audit 2026-03-09T17:29:20.121502+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:20.230980+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:20.230980+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: cluster 2026-03-09T17:29:21.073069+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: cluster 2026-03-09T17:29:21.073069+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076133+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076133+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076163+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076163+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076191+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076191+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076215+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.076215+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 
192.168.123.100:0/4236302555' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: cluster 2026-03-09T17:29:21.080052+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: cluster 2026-03-09T17:29:21.080052+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.146825+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/699630662' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.146825+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/699630662' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.147893+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.147893+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.234946+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:21 vm02 bash[23351]: audit 2026-03-09T17:29:21.234946+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:21.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:29:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:20.230980+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:20.230980+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: cluster 2026-03-09T17:29:21.073069+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: cluster 2026-03-09T17:29:21.073069+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076133+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076133+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076163+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076163+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076191+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076191+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076215+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.076215+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 
192.168.123.100:0/4236302555' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: cluster 2026-03-09T17:29:21.080052+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: cluster 2026-03-09T17:29:21.080052+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.146825+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/699630662' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.146825+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/699630662' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.147893+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.147893+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.234946+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:21 vm00 bash[20770]: audit 2026-03-09T17:29:21.234946+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:20.230980+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:20.230980+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: cluster 2026-03-09T17:29:21.073069+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: cluster 2026-03-09T17:29:21.073069+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076133+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076133+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/1234079441' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60328-6", "pool2": "test-rados-api-vm00-60328-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076163+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076163+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-9", "mode": "writeback"}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076191+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076191+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/650447599' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59908-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076215+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 192.168.123.100:0/4236302555' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.076215+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 
192.168.123.100:0/4236302555' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: cluster 2026-03-09T17:29:21.080052+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: cluster 2026-03-09T17:29:21.080052+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.146825+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/699630662' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.146825+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/699630662' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.147893+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.147893+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.234946+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:21 vm00 bash[28333]: audit 2026-03-09T17:29:21.234946+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.180 INFO:tasks.workunit.client.0.vm00.stdout:ch_notify_pp: flushed 2026-03-09T17:29:22.180 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 (3005 ms) 2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: trying... 
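The mon audit entries repeated throughout this stretch show how each librados test helper drives the cluster: every administrative step ("osd pool application enable", "osd tier add", "osd tier cache-mode", "status") is submitted as a JSON-formatted mon command, and each request is logged once at 'dispatch' and again at 'finished'. Below is a minimal sketch of issuing one such command through the python-rados bindings; the conffile path and pool name are placeholders, and the tests themselves use the librados C/C++ API rather than Python.

# Hedged sketch: submit the same kind of JSON mon command seen in the audit
# trail above via python-rados. Conffile path and pool name are placeholders.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # assumed config location
cluster.connect()
try:
    cmd = json.dumps({
        "prefix": "osd pool application enable",
        "pool": "example-pool",            # placeholder pool name
        "app": "rados",
        "yes_i_really_mean_it": True,
    })
    # mon_command() returns (return code, output buffer, status string); the
    # monitor records the request as 'dispatch' and later as 'finished'.
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outs)
finally:
    cluster.shutdown()

The same pattern covers the read-only queries in the log; for example json.dumps({"prefix": "status", "format": "json"}) corresponds to the periodic 'status' dispatch lines recorded by the monitors.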
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342099870224 notify_id 352187318279 notifier_gid 15126
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: timed out
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushed
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 (3021 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: List watches
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342099870224 notify_id 365072220165 notifier_gid 15126
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2 done
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: watch_check
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: unwatch2
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: done
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 (3177 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: List watches
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94342099870224 notify_id 377957122056 notifier_gid 15126
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2 done
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: watch_check
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: unwatch2
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: done
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 (3128 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP (17007 ms total)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp:
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] Global test environment tear-down
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [==========] 16 tests from 2 test suites ran. (29706 ms total)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ PASSED ] 16 tests.
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: Running main() from gmock_main.cc
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [==========] Running 8 tests from 2 test suites.
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] Global test environment set-up.
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibradosCWriteOps.NewDelete
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibradosCWriteOps.NewDelete (0 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps (0 ms total)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations:
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.assertExists
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.assertExists (3110 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteOpAssertVersion
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteOpAssertVersion (3295 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Xattrs
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Xattrs (3398 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Write
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Write (3439 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Exec
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Exec (2647 ms)
2026-03-09T17:29:22.181 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteSame
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteSame (3061 ms)
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.CmpExt
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.CmpExt (10641 ms)
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps (29591 ms total)
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations:
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] Global test environment tear-down
2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout:
api_c_write_operations: [==========] 8 tests from 2 test suites ran. (29591 ms total) 2026-03-09T17:29:22.182 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ PASSED ] 8 tests. 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: cluster 2026-03-09T17:29:20.681873+0000 mgr.y (mgr.14505) 132 : cluster [DBG] pgmap v103: 680 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 487 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: cluster 2026-03-09T17:29:20.681873+0000 mgr.y (mgr.14505) 132 : cluster [DBG] pgmap v103: 680 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 487 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.291691+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.291691+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: cluster 2026-03-09T17:29:21.356604+0000 mon.a (mon.0) 1165 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: cluster 2026-03-09T17:29:21.356604+0000 mon.a (mon.0) 1165 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.418877+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.418877+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.443438+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.443438+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 
192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.504964+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.504964+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.524765+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.524765+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.549989+0000 mgr.y (mgr.14505) 133 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.549989+0000 mgr.y (mgr.14505) 133 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.621209+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.621209+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.669300+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.669300+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.670390+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:21.670390+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.080962+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.080962+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.081006+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.081006+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.081030+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.081030+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.098257+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.098257+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: cluster 2026-03-09T17:29:22.114777+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: cluster 2026-03-09T17:29:22.114777+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.122214+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.122214+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.124984+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.124984+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.132652+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.132652+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.235912+0000 mon.c (mon.2) 185 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:22.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:22 vm02 bash[23351]: audit 2026-03-09T17:29:22.235912+0000 mon.c (mon.2) 185 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: cluster 2026-03-09T17:29:20.681873+0000 mgr.y (mgr.14505) 132 : cluster [DBG] pgmap v103: 680 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 487 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: cluster 2026-03-09T17:29:20.681873+0000 mgr.y (mgr.14505) 132 : cluster [DBG] pgmap v103: 680 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 487 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.291691+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.291691+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: cluster 2026-03-09T17:29:21.356604+0000 mon.a (mon.0) 1165 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: cluster 2026-03-09T17:29:21.356604+0000 mon.a (mon.0) 1165 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.418877+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.418877+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.443438+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.443438+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.504964+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.504964+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.524765+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.524765+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.549989+0000 mgr.y (mgr.14505) 133 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.549989+0000 mgr.y (mgr.14505) 133 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.621209+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.621209+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.669300+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.669300+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.670390+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:21.670390+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.080962+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.080962+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.081006+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.081006+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.081030+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.081030+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.098257+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.098257+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: cluster 2026-03-09T17:29:22.114777+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: cluster 2026-03-09T17:29:22.114777+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.122214+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.122214+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.124984+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.124984+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.132652+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.132652+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.235912+0000 mon.c (mon.2) 185 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:22 vm00 bash[20770]: audit 2026-03-09T17:29:22.235912+0000 mon.c (mon.2) 185 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: cluster 2026-03-09T17:29:20.681873+0000 mgr.y (mgr.14505) 132 : cluster [DBG] pgmap v103: 680 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 487 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: cluster 2026-03-09T17:29:20.681873+0000 mgr.y (mgr.14505) 132 : cluster [DBG] pgmap v103: 680 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 487 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.291691+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.291691+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: cluster 2026-03-09T17:29:21.356604+0000 mon.a (mon.0) 1165 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: cluster 2026-03-09T17:29:21.356604+0000 mon.a (mon.0) 1165 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.418877+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.418877+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.443438+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.443438+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.504964+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.504964+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.524765+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.524765+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.549989+0000 mgr.y (mgr.14505) 133 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.549989+0000 mgr.y (mgr.14505) 133 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.621209+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.621209+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.669300+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.669300+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.670390+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:21.670390+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.080962+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.080962+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59916-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.081006+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.081006+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.081030+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.081030+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.098257+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.098257+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: cluster 2026-03-09T17:29:22.114777+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: cluster 2026-03-09T17:29:22.114777+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.122214+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.122214+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.124984+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.124984+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.132652+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.132652+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.235912+0000 mon.c (mon.2) 185 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:23.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:22 vm00 bash[28333]: audit 2026-03-09T17:29:22.235912+0000 mon.c (mon.2) 185 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [==========] Running 4 tests from 1 test suite. 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] Global test environment set-up. 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterEarly 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterEarly (5043 ms) 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterLate 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterLate (73 ms) 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Status 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.Status (20093 ms) 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Close 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: attempt 0 of 20 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: failed to deregister: 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: { 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "epoch": 19, 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "modified": "2026-03-09T17:29:20.681530+0000", 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "services": { 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "laundry": { 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "daemons": { 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "summary": "", 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "close-test-pid60407": { 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "start_epoch": 18, 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "start_stamp": "2026-03-09T17:29:17.876506+0000", 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "gid": 18798, 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "addr": "192.168.123.100:0/3677959287", 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "metadata": { 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "arch": "x86_64", 2026-03-09T17:29:23.090 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "ceph_release": "squid", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "ceph_version_short": "19.2.3-678-ge911bdeb", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "cpu": "AMD Ryzen 9 7950X3D 16-Core Processor", 
2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "distro": "ubuntu", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "distro_description": "Ubuntu 22.04.5 LTS", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "distro_version": "22.04", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "foo": "bar", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "hostname": "vm00", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "kernel_description": "#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "kernel_version": "5.15.0-1092-kvm", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "mem_swap_kb": "0", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "mem_total_kb": "8156564", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "os": "Linux", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "this": "that" 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: }, 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "task_status": {} 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: }, 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "rgw": { 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "daemons": { 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "summary": "", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "14385": { 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "start_epoch": 13, 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "start_stamp": "2026-03-09T17:26:35.306672+0000", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "gid": 14385, 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "addr": "192.168.123.100:0/147358814", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "metadata": { 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "arch": "x86_64", 2026-03-09T17:29:23.091 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "ceph_release": "squid", 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: cluster 2026-03-09T17:29:22.682343+0000 mgr.y (mgr.14505) 134 : cluster [DBG] pgmap v106: 612 pgs: 32 creating+peering, 1 active+clean+snaptrim, 192 unknown, 387 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: cluster 2026-03-09T17:29:22.682343+0000 mgr.y (mgr.14505) 134 : cluster [DBG] pgmap v106: 612 pgs: 32 creating+peering, 1 active+clean+snaptrim, 192 unknown, 387 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 
2026-03-09T17:29:23.066080+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.100:0/659278286' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.066080+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.100:0/659278286' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.066317+0000 mgr.y (mgr.14505) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.066317+0000 mgr.y (mgr.14505) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: cluster 2026-03-09T17:29:23.081899+0000 mon.a (mon.0) 1176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: cluster 2026-03-09T17:29:23.081899+0000 mon.a (mon.0) 1176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.101318+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.101318+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: cluster 2026-03-09T17:29:23.149408+0000 mon.a (mon.0) 1178 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: cluster 2026-03-09T17:29:23.149408+0000 mon.a (mon.0) 1178 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.173968+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/1529748978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.173968+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/1529748978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.184524+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.184524+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.242257+0000 mon.c (mon.2) 188 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:23 vm02 bash[23351]: audit 2026-03-09T17:29:23.242257+0000 mon.c (mon.2) 188 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:24.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: cluster 2026-03-09T17:29:22.682343+0000 mgr.y (mgr.14505) 134 : cluster [DBG] pgmap v106: 612 pgs: 32 creating+peering, 1 active+clean+snaptrim, 192 unknown, 387 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: cluster 2026-03-09T17:29:22.682343+0000 mgr.y (mgr.14505) 134 : cluster [DBG] pgmap v106: 612 pgs: 32 creating+peering, 1 active+clean+snaptrim, 192 unknown, 387 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.066080+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.100:0/659278286' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.066080+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.100:0/659278286' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.066317+0000 mgr.y (mgr.14505) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.066317+0000 mgr.y (mgr.14505) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: cluster 2026-03-09T17:29:23.081899+0000 mon.a (mon.0) 1176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: cluster 2026-03-09T17:29:23.081899+0000 mon.a (mon.0) 1176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.101318+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.101318+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: cluster 2026-03-09T17:29:23.149408+0000 mon.a (mon.0) 1178 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: cluster 2026-03-09T17:29:23.149408+0000 mon.a (mon.0) 1178 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.173968+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/1529748978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.173968+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/1529748978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.184524+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.184524+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.242257+0000 mon.c (mon.2) 188 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:23 vm00 bash[20770]: audit 2026-03-09T17:29:23.242257+0000 mon.c (mon.2) 188 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: cluster 2026-03-09T17:29:22.682343+0000 mgr.y (mgr.14505) 134 : cluster [DBG] pgmap v106: 612 pgs: 32 creating+peering, 1 active+clean+snaptrim, 192 unknown, 387 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: cluster 2026-03-09T17:29:22.682343+0000 mgr.y (mgr.14505) 134 : cluster [DBG] pgmap v106: 612 pgs: 32 creating+peering, 1 active+clean+snaptrim, 192 unknown, 387 active+clean; 144 MiB data, 832 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.066080+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.100:0/659278286' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.066080+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.100:0/659278286' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.066317+0000 mgr.y (mgr.14505) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.066317+0000 mgr.y (mgr.14505) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: cluster 2026-03-09T17:29:23.081899+0000 mon.a (mon.0) 1176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: cluster 2026-03-09T17:29:23.081899+0000 mon.a (mon.0) 1176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.101318+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.101318+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-9"}]': finished 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: cluster 2026-03-09T17:29:23.149408+0000 mon.a (mon.0) 1178 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: cluster 2026-03-09T17:29:23.149408+0000 mon.a (mon.0) 1178 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.173968+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/1529748978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.173968+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/1529748978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.184524+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.184524+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.242257+0000 mon.c (mon.2) 188 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:23 vm00 bash[28333]: audit 2026-03-09T17:29:23.242257+0000 mon.c (mon.2) 188 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.124499+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.124499+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.124550+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.124550+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: cluster 2026-03-09T17:29:24.128991+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: cluster 2026-03-09T17:29:24.128991+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.140831+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/2711722130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.140831+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/2711722130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.149637+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.149637+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.249383+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:25 vm00 bash[20770]: audit 2026-03-09T17:29:24.249383+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.124499+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.124499+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.124550+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.124550+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: cluster 2026-03-09T17:29:24.128991+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: cluster 2026-03-09T17:29:24.128991+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.140831+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/2711722130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.140831+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/2711722130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.149637+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.149637+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.249383+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:25 vm00 bash[28333]: audit 2026-03-09T17:29:24.249383+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:25.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.124499+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.124499+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-60171-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.124550+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.124550+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59908-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: cluster 2026-03-09T17:29:24.128991+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: cluster 2026-03-09T17:29:24.128991+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.140831+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/2711722130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.140831+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/2711722130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.149637+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.149637+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.249383+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:25 vm02 bash[23351]: audit 2026-03-09T17:29:24.249383+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) sq misc: Running main() from gmock_main.cc 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [==========] Running 12 tests from 1 test suite. 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] Global test environment set-up. 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] 12 tests from NeoRadosMisc 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Version 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Version (1730 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.WaitOSDMap 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.WaitOSDMap (2231 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongName 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongName (3242 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongLocator 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongLocator (2387 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongNamespace 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongNamespace (3344 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongAttrName 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongAttrName (2725 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Exec 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Exec (3035 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Operate1 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Operate1 (3590 ms) 2026-03-09T17:29:26.370 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Operate2 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] 
NeoRadosMisc.Operate2 (2990 ms) 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.BigObject 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.BigObject (3061 ms) 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.BigAttr 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.BigAttr (2028 ms) 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.WriteSame 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.WriteSame (3188 ms) 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] 12 tests from NeoRadosMisc (33551 ms total) 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] Global test environment tear-down 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [==========] 12 tests from 1 test suite ran. (33551 ms total) 2026-03-09T17:29:26.371 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ PASSED ] 12 tests. 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: cluster 2026-03-09T17:29:24.682796+0000 mgr.y (mgr.14505) 136 : cluster [DBG] pgmap v109: 492 pgs: 1 active+clean+snaptrim, 104 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: cluster 2026-03-09T17:29:24.682796+0000 mgr.y (mgr.14505) 136 : cluster [DBG] pgmap v109: 492 pgs: 1 active+clean+snaptrim, 104 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.195309+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.195309+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.270466+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.270466+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: cluster 2026-03-09T17:29:25.281126+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: cluster 2026-03-09T17:29:25.281126+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.309469+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.309469+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.313228+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.313228+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.319006+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:26 vm00 bash[20770]: audit 2026-03-09T17:29:25.319006+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 
192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:29:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:29:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: cluster 2026-03-09T17:29:24.682796+0000 mgr.y (mgr.14505) 136 : cluster [DBG] pgmap v109: 492 pgs: 1 active+clean+snaptrim, 104 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: cluster 2026-03-09T17:29:24.682796+0000 mgr.y (mgr.14505) 136 : cluster [DBG] pgmap v109: 492 pgs: 1 active+clean+snaptrim, 104 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.195309+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.195309+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.270466+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.270466+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: cluster 2026-03-09T17:29:25.281126+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: cluster 2026-03-09T17:29:25.281126+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.309469+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.309469+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.313228+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.313228+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.319006+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:26 vm00 bash[28333]: audit 2026-03-09T17:29:25.319006+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: cluster 2026-03-09T17:29:24.682796+0000 mgr.y (mgr.14505) 136 : cluster [DBG] pgmap v109: 492 pgs: 1 active+clean+snaptrim, 104 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: cluster 2026-03-09T17:29:24.682796+0000 mgr.y (mgr.14505) 136 : cluster [DBG] pgmap v109: 492 pgs: 1 active+clean+snaptrim, 104 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.195309+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.195309+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59916-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.270466+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.270466+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: cluster 2026-03-09T17:29:25.281126+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: cluster 2026-03-09T17:29:25.281126+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.309469+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.309469+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.313228+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.313228+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.319006+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:26 vm02 bash[23351]: audit 2026-03-09T17:29:25.319006+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.217293+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.217293+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.217349+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.217349+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.229293+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.229293+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.383135+0000 mon.c (mon.2) 191 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.383135+0000 mon.c (mon.2) 191 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.414408+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/2096789039' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.414408+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/2096789039' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.415909+0000 mon.a (mon.0) 1191 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.415909+0000 mon.a (mon.0) 1191 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.527218+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.527218+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.586515+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.586515+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.621977+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:26.621977+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.683415+0000 mgr.y (mgr.14505) 137 : cluster [DBG] pgmap v113: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: cluster 2026-03-09T17:29:26.683415+0000 mgr.y (mgr.14505) 137 : cluster [DBG] pgmap v113: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:27.095784+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:27.095784+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:27.097741+0000 mon.c (mon.2) 193 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:27.097741+0000 mon.c (mon.2) 193 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:27.390194+0000 mon.c (mon.2) 194 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:27 vm00 bash[20770]: audit 2026-03-09T17:29:27.390194+0000 mon.c (mon.2) 194 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.217293+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.217293+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.217349+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.217349+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.229293+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.229293+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.383135+0000 mon.c (mon.2) 191 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.383135+0000 mon.c (mon.2) 191 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.414408+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/2096789039' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.414408+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 
192.168.123.100:0/2096789039' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.415909+0000 mon.a (mon.0) 1191 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.415909+0000 mon.a (mon.0) 1191 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.527218+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.527218+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.586515+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.586515+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.621977+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:26.621977+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 
192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.683415+0000 mgr.y (mgr.14505) 137 : cluster [DBG] pgmap v113: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: cluster 2026-03-09T17:29:26.683415+0000 mgr.y (mgr.14505) 137 : cluster [DBG] pgmap v113: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:27.095784+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:27.095784+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:27.097741+0000 mon.c (mon.2) 193 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:27.097741+0000 mon.c (mon.2) 193 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:27.390194+0000 mon.c (mon.2) 194 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:27 vm00 bash[28333]: audit 2026-03-09T17:29:27.390194+0000 mon.c (mon.2) 194 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.217293+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.217293+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.217349+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 
192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.217349+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60146-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.229293+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.229293+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.383135+0000 mon.c (mon.2) 191 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.383135+0000 mon.c (mon.2) 191 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.414408+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/2096789039' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.414408+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/2096789039' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.415909+0000 mon.a (mon.0) 1191 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.415909+0000 mon.a (mon.0) 1191 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.527218+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.527218+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.586515+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.586515+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.621977+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:26.621977+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.683415+0000 mgr.y (mgr.14505) 137 : cluster [DBG] pgmap v113: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: cluster 2026-03-09T17:29:26.683415+0000 mgr.y (mgr.14505) 137 : cluster [DBG] pgmap v113: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:27.095784+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:27.095784+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:27.097741+0000 mon.c (mon.2) 193 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:27.097741+0000 mon.c (mon.2) 193 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:27.390194+0000 mon.c (mon.2) 194 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:27 vm02 bash[23351]: audit 2026-03-09T17:29:27.390194+0000 mon.c (mon.2) 194 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout:uid (stable)", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "ceph_version_short": "19.2.3-678-ge911bdeb", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "container_hostname": "vm00", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "container_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "cpu": "AMD Ryzen 9 7950X3D 16-Core Processor", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "distro": "centos", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "distro_description": "CentOS Stream 9", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "distro_version": "9", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "frontend_config#0": "beast port=80", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "frontend_type#0": "beast", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "hostname": "vm00", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "id": "foo.a", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "kernel_description": "#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "kernel_version": "5.15.0-1092-kvm", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "mem_swap_kb": "0", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "mem_total_kb": "8156564", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "num_handles": "1", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "os": "Linux", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "pid": "7", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "realm_id": "", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "realm_name": "", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "zone_id": "b5fa5622-fbab-4e52-ab61-328b320d0dcd", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "zone_name": "default", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "zonegroup_id": "14769702-7847-42cc-a589-aae46379b316", 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "zonegroup_name": "default" 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: }, 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: "task_status": {} 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:28.477 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:28.478 
INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: } 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: attempt 1 of 20 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.Close (10695 ms) 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP (35904 ms total) 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] Global test environment tear-down 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [==========] 4 tests from 1 test suite ran. (35904 ms total) 2026-03-09T17:29:28.478 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ PASSED ] 4 tests. 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.612619+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.612619+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.612681+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]': finished 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.612681+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]': finished 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: cluster 2026-03-09T17:29:27.638190+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: cluster 2026-03-09T17:29:27.638190+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.647157+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.647157+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 
192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.658413+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:27.658413+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.151436+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.151436+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.152486+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.152486+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.391638+0000 mon.c (mon.2) 195 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.391638+0000 mon.c (mon.2) 195 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.470548+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/4247471386' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:28 vm02 bash[23351]: audit 2026-03-09T17:29:28.470548+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 
192.168.123.100:0/4247471386' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.612619+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.612619+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.612681+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.612681+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: cluster 2026-03-09T17:29:27.638190+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: cluster 2026-03-09T17:29:27.638190+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.647157+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.647157+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.658413+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:27.658413+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 
192.168.123.100:0/3702530234' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.151436+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.151436+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.152486+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.152486+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.391638+0000 mon.c (mon.2) 195 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.391638+0000 mon.c (mon.2) 195 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.470548+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/4247471386' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:28 vm00 bash[20770]: audit 2026-03-09T17:29:28.470548+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/4247471386' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.612619+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.612619+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59908-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.612681+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.612681+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache", "force_nonempty":""}]': finished 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: cluster 2026-03-09T17:29:27.638190+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: cluster 2026-03-09T17:29:27.638190+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.647157+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.647157+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.658413+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:27.658413+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.151436+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.151436+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.152486+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.152486+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.391638+0000 mon.c (mon.2) 195 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.391638+0000 mon.c (mon.2) 195 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.470548+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/4247471386' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:28 vm00 bash[28333]: audit 2026-03-09T17:29:28.470548+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/4247471386' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.470692+0000 mgr.y (mgr.14505) 138 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.470692+0000 mgr.y (mgr.14505) 138 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: cluster 2026-03-09T17:29:28.684711+0000 mgr.y (mgr.14505) 139 : cluster [DBG] pgmap v115: 556 pgs: 48 creating+peering, 33 creating+activating, 1 active+clean+snaptrim, 10 unknown, 464 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: cluster 2026-03-09T17:29:28.684711+0000 mgr.y (mgr.14505) 139 : cluster [DBG] pgmap v115: 556 pgs: 48 creating+peering, 33 creating+activating, 1 active+clean+snaptrim, 10 unknown, 464 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.692537+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? 
192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.692537+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.692574+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.692574+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.692600+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.692600+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: cluster 2026-03-09T17:29:28.710706+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: cluster 2026-03-09T17:29:28.710706+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.722976+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.722976+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.740196+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.740196+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.783273+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:28.783273+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:29.392326+0000 mon.c (mon.2) 196 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:29.392326+0000 mon.c (mon.2) 196 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:29.754462+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:29.754462+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:29.754519+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: audit 2026-03-09T17:29:29.754519+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 
192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]': finished 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: cluster 2026-03-09T17:29:29.876734+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T17:29:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:30 vm02 bash[23351]: cluster 2026-03-09T17:29:29.876734+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.470692+0000 mgr.y (mgr.14505) 138 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.470692+0000 mgr.y (mgr.14505) 138 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: cluster 2026-03-09T17:29:28.684711+0000 mgr.y (mgr.14505) 139 : cluster [DBG] pgmap v115: 556 pgs: 48 creating+peering, 33 creating+activating, 1 active+clean+snaptrim, 10 unknown, 464 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: cluster 2026-03-09T17:29:28.684711+0000 mgr.y (mgr.14505) 139 : cluster [DBG] pgmap v115: 556 pgs: 48 creating+peering, 33 creating+activating, 1 active+clean+snaptrim, 10 unknown, 464 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.692537+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.692537+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.692574+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.692574+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.692600+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.692600+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: cluster 2026-03-09T17:29:28.710706+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: cluster 2026-03-09T17:29:28.710706+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.722976+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.722976+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.740196+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.740196+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.783273+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:28.783273+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:29.392326+0000 mon.c (mon.2) 196 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:29.392326+0000 mon.c (mon.2) 196 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:29.754462+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:29.754462+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:29.754519+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: audit 2026-03-09T17:29:29.754519+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: cluster 2026-03-09T17:29:29.876734+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:30 vm00 bash[20770]: cluster 2026-03-09T17:29:29.876734+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.470692+0000 mgr.y (mgr.14505) 138 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.470692+0000 mgr.y (mgr.14505) 138 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: cluster 2026-03-09T17:29:28.684711+0000 mgr.y (mgr.14505) 139 : cluster [DBG] pgmap v115: 556 pgs: 48 creating+peering, 33 creating+activating, 1 active+clean+snaptrim, 10 unknown, 464 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: cluster 2026-03-09T17:29:28.684711+0000 mgr.y (mgr.14505) 139 : cluster [DBG] pgmap v115: 556 pgs: 48 creating+peering, 33 creating+activating, 1 active+clean+snaptrim, 10 unknown, 464 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.692537+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.692537+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60146-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.692574+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.692574+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/3702530234' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59916-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.692600+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.692600+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: cluster 2026-03-09T17:29:28.710706+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: cluster 2026-03-09T17:29:28.710706+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.722976+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.722976+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.740196+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.740196+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.783273+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:28.783273+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:29.392326+0000 mon.c (mon.2) 196 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:29.392326+0000 mon.c (mon.2) 196 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:29.754462+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:29.754462+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:29.754519+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: audit 2026-03-09T17:29:29.754519+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.100:0/3983367371' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60146-10", "tierpool":"test-rados-api-vm00-60146-10-cache"}]': finished 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: cluster 2026-03-09T17:29:29.876734+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T17:29:30.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:30 vm00 bash[28333]: cluster 2026-03-09T17:29:29.876734+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:29.841438+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:29.841438+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:29.977301+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:29.977301+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.063359+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.100:0/3632482846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.063359+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 
192.168.123.100:0/3632482846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.164695+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.164695+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.393220+0000 mon.c (mon.2) 198 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.393220+0000 mon.c (mon.2) 198 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: cluster 2026-03-09T17:29:30.754915+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: cluster 2026-03-09T17:29:30.754915+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.804020+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]': finished 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.804020+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]': finished 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.804053+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.804053+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: cluster 2026-03-09T17:29:30.823039+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: cluster 2026-03-09T17:29:30.823039+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.898567+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:31 vm02 bash[23351]: audit 2026-03-09T17:29:30.898567+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:29.841438+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:29.841438+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:29.977301+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:29.977301+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.063359+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.100:0/3632482846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.063359+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 
192.168.123.100:0/3632482846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.164695+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.164695+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.393220+0000 mon.c (mon.2) 198 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.393220+0000 mon.c (mon.2) 198 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: cluster 2026-03-09T17:29:30.754915+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: cluster 2026-03-09T17:29:30.754915+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.804020+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]': finished 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.804020+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]': finished 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.804053+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.804053+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: cluster 2026-03-09T17:29:30.823039+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: cluster 2026-03-09T17:29:30.823039+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.898567+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.540 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:31 vm00 bash[28333]: audit 2026-03-09T17:29:30.898567+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:29.841438+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:29.841438+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:29.977301+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:29.977301+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.063359+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.100:0/3632482846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.063359+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 
192.168.123.100:0/3632482846' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.164695+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.164695+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.393220+0000 mon.c (mon.2) 198 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.393220+0000 mon.c (mon.2) 198 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: cluster 2026-03-09T17:29:30.754915+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: cluster 2026-03-09T17:29:30.754915+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.804020+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]': finished 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.804020+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-11", "mode": "writeback"}]': finished 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.804053+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.804053+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59908-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: cluster 2026-03-09T17:29:30.823039+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: cluster 2026-03-09T17:29:30.823039+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.898567+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.545 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:31 vm00 bash[20770]: audit 2026-03-09T17:29:30.898567+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: Running main() from gmock_main.cc 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [==========] Running 9 tests from 1 test suite. 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] Global test environment set-up. 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] 9 tests from LibRadosPools 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolList 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolList (3361 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup (3316 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup2 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup2 (2346 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookupOtherInstance 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolLookupOtherInstance (3418 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolReverseLookupOtherInstance 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolReverseLookupOtherInstance (2694 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolDelete 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolDelete (5134 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateDelete 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateDelete (6591 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] 
LibRadosPools.PoolCreateWithCrushRule 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateWithCrushRule (5000 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolGetBaseTier 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolGetBaseTier (7361 ms) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] 9 tests from LibRadosPools (39221 ms total) 2026-03-09T17:29:31.617 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: 2026-03-09T17:29:31.618 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] Global test environment tear-down 2026-03-09T17:29:31.618 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [==========] 9 tests from 1 test suite ran. (39221 ms total) 2026-03-09T17:29:31.618 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ PASSED ] 9 tests. 2026-03-09T17:29:31.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:29:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:29:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: cluster 2026-03-09T17:29:30.685194+0000 mgr.y (mgr.14505) 140 : cluster [DBG] pgmap v118: 588 pgs: 12 creating+peering, 19 creating+activating, 1 active+clean+snaptrim, 96 unknown, 460 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 KiB/s wr, 2 op/s 2026-03-09T17:29:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: cluster 2026-03-09T17:29:30.685194+0000 mgr.y (mgr.14505) 140 : cluster [DBG] pgmap v118: 588 pgs: 12 creating+peering, 19 creating+activating, 1 active+clean+snaptrim, 96 unknown, 460 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 KiB/s wr, 2 op/s 2026-03-09T17:29:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.394424+0000 mon.c (mon.2) 199 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.394424+0000 mon.c (mon.2) 199 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: cluster 2026-03-09T17:29:31.422212+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: cluster 2026-03-09T17:29:31.422212+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.428209+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.428209+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 
192.168.123.100:0/3903760116' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: cluster 2026-03-09T17:29:31.470028+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: cluster 2026-03-09T17:29:31.470028+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.947382+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.947382+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.948677+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:32 vm02 bash[23351]: audit 2026-03-09T17:29:31.948677+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: cluster 2026-03-09T17:29:30.685194+0000 mgr.y (mgr.14505) 140 : cluster [DBG] pgmap v118: 588 pgs: 12 creating+peering, 19 creating+activating, 1 active+clean+snaptrim, 96 unknown, 460 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 KiB/s wr, 2 op/s 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: cluster 2026-03-09T17:29:30.685194+0000 mgr.y (mgr.14505) 140 : cluster [DBG] pgmap v118: 588 pgs: 12 creating+peering, 19 creating+activating, 1 active+clean+snaptrim, 96 unknown, 460 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 KiB/s wr, 2 op/s 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.394424+0000 mon.c (mon.2) 199 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.394424+0000 mon.c (mon.2) 199 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: cluster 2026-03-09T17:29:31.422212+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: cluster 2026-03-09T17:29:31.422212+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.428209+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.428209+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: cluster 2026-03-09T17:29:31.470028+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: cluster 2026-03-09T17:29:31.470028+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.947382+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.947382+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.948677+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:32 vm00 bash[20770]: audit 2026-03-09T17:29:31.948677+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: cluster 2026-03-09T17:29:30.685194+0000 mgr.y (mgr.14505) 140 : cluster [DBG] pgmap v118: 588 pgs: 12 creating+peering, 19 creating+activating, 1 active+clean+snaptrim, 96 unknown, 460 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 KiB/s wr, 2 op/s 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: cluster 2026-03-09T17:29:30.685194+0000 mgr.y (mgr.14505) 140 : cluster [DBG] pgmap v118: 588 pgs: 12 creating+peering, 19 creating+activating, 1 active+clean+snaptrim, 96 unknown, 460 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 KiB/s wr, 2 op/s 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.394424+0000 mon.c (mon.2) 199 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.394424+0000 mon.c (mon.2) 199 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: cluster 2026-03-09T17:29:31.422212+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: cluster 2026-03-09T17:29:31.422212+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.428209+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.428209+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 192.168.123.100:0/3903760116' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: cluster 2026-03-09T17:29:31.470028+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: cluster 2026-03-09T17:29:31.470028+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.947382+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.947382+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.948677+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:32 vm00 bash[28333]: audit 2026-03-09T17:29:31.948677+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:31.558302+0000 mgr.y (mgr.14505) 141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:31.558302+0000 mgr.y (mgr.14505) 141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.395252+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.395252+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.562853+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.562853+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: cluster 2026-03-09T17:29:32.579626+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: cluster 2026-03-09T17:29:32.579626+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.608481+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.608481+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.614861+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.614861+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.614945+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/3081787593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.614945+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/3081787593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.630351+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:33 vm00 bash[20770]: audit 2026-03-09T17:29:32.630351+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:31.558302+0000 mgr.y (mgr.14505) 141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:31.558302+0000 mgr.y (mgr.14505) 141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.395252+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.395252+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.562853+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.562853+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: cluster 2026-03-09T17:29:32.579626+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: cluster 2026-03-09T17:29:32.579626+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.608481+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.608481+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.614861+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.614861+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.614945+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/3081787593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.614945+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 
192.168.123.100:0/3081787593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.630351+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:33 vm00 bash[28333]: audit 2026-03-09T17:29:32.630351+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:31.558302+0000 mgr.y (mgr.14505) 141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:31.558302+0000 mgr.y (mgr.14505) 141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.395252+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.395252+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.562853+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.562853+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: cluster 2026-03-09T17:29:32.579626+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: cluster 2026-03-09T17:29:32.579626+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.608481+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.608481+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.614861+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.614861+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.614945+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/3081787593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.614945+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/3081787593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.630351+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:34.146 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:33 vm02 bash[23351]: audit 2026-03-09T17:29:32.630351+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: cluster 2026-03-09T17:29:32.686251+0000 mgr.y (mgr.14505) 142 : cluster [DBG] pgmap v122: 524 pgs: 1 active+clean+snaptrim, 96 unknown, 427 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: cluster 2026-03-09T17:29:32.686251+0000 mgr.y (mgr.14505) 142 : cluster [DBG] pgmap v122: 524 pgs: 1 active+clean+snaptrim, 96 unknown, 427 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.396136+0000 mon.c (mon.2) 201 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.396136+0000 mon.c (mon.2) 201 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: cluster 2026-03-09T17:29:33.563299+0000 mon.a (mon.0) 1227 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: cluster 2026-03-09T17:29:33.563299+0000 mon.a (mon.0) 1227 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.810950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.810950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.811055+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.811055+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: cluster 2026-03-09T17:29:33.819363+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: cluster 2026-03-09T17:29:33.819363+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.832128+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/2067026098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.832128+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 
192.168.123.100:0/2067026098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.833330+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:33.833330+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.354627+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.354627+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.357546+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.357546+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.359389+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.359389+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.360906+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.360906+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.364922+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.364922+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.368041+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.368041+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.369281+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.369281+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.370730+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.370730+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.372662+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.372662+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.376444+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.376444+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.396958+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:34 vm00 bash[28333]: audit 2026-03-09T17:29:34.396958+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: cluster 2026-03-09T17:29:32.686251+0000 mgr.y (mgr.14505) 142 : cluster [DBG] pgmap v122: 524 pgs: 1 active+clean+snaptrim, 96 unknown, 427 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: cluster 2026-03-09T17:29:32.686251+0000 mgr.y (mgr.14505) 142 : cluster [DBG] pgmap v122: 524 pgs: 1 active+clean+snaptrim, 96 unknown, 427 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.396136+0000 mon.c (mon.2) 201 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.396136+0000 mon.c (mon.2) 201 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: cluster 2026-03-09T17:29:33.563299+0000 mon.a (mon.0) 1227 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: cluster 2026-03-09T17:29:33.563299+0000 mon.a (mon.0) 1227 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.810950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.810950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.811055+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.811055+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: cluster 2026-03-09T17:29:33.819363+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: cluster 2026-03-09T17:29:33.819363+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.832128+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/2067026098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.832128+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/2067026098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.833330+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:33.833330+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.354627+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.354627+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.357546+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.357546+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.359389+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.359389+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.360906+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.360906+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.364922+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.364922+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.368041+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.368041+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.369281+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.369281+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.370730+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.370730+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.372662+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.372662+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.376444+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.376444+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.396958+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:34 vm00 bash[20770]: audit 2026-03-09T17:29:34.396958+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: cluster 2026-03-09T17:29:32.686251+0000 mgr.y (mgr.14505) 142 : cluster [DBG] pgmap v122: 524 pgs: 1 active+clean+snaptrim, 96 unknown, 427 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: cluster 2026-03-09T17:29:32.686251+0000 mgr.y (mgr.14505) 142 : cluster [DBG] pgmap v122: 524 pgs: 1 active+clean+snaptrim, 96 unknown, 427 active+clean; 145 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.396136+0000 mon.c (mon.2) 201 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.396136+0000 mon.c (mon.2) 201 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: cluster 2026-03-09T17:29:33.563299+0000 mon.a (mon.0) 1227 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: cluster 2026-03-09T17:29:33.563299+0000 mon.a (mon.0) 1227 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.810950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.810950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-11"}]': finished 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.811055+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.811055+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59908-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: cluster 2026-03-09T17:29:33.819363+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: cluster 2026-03-09T17:29:33.819363+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.832128+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/2067026098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.832128+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/2067026098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.833330+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:33.833330+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.354627+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.354627+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.357546+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.357546+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.359389+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.359389+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.360906+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.360906+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.364922+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.364922+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.368041+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.368041+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.369281+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.369281+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.370730+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.370730+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.372662+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.372662+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:35.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.376444+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:35.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.376444+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:35.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.396958+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:35.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:34 vm02 bash[23351]: audit 2026-03-09T17:29:34.396958+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.355890+0000 mgr.y (mgr.14505) 143 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.355890+0000 mgr.y (mgr.14505) 143 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.358009+0000 mgr.y (mgr.14505) 144 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.358009+0000 mgr.y (mgr.14505) 144 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.359532+0000 mgr.y (mgr.14505) 145 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.359532+0000 mgr.y (mgr.14505) 145 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.361250+0000 mgr.y (mgr.14505) 146 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.361250+0000 mgr.y (mgr.14505) 146 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.365329+0000 mgr.y (mgr.14505) 147 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.365329+0000 mgr.y (mgr.14505) 147 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.368540+0000 mgr.y (mgr.14505) 148 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.368540+0000 mgr.y (mgr.14505) 148 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.369496+0000 mgr.y (mgr.14505) 149 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.369496+0000 mgr.y (mgr.14505) 149 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.370868+0000 mgr.y (mgr.14505) 150 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.370868+0000 mgr.y (mgr.14505) 150 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.372892+0000 mgr.y (mgr.14505) 151 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.372892+0000 mgr.y (mgr.14505) 151 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.376641+0000 mgr.y (mgr.14505) 152 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.376641+0000 mgr.y (mgr.14505) 152 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.410397+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.410397+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.427137+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.427137+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.461690+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.461690+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.506599+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.506599+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.541407+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.541407+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.590438+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.590438+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.611090+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.611090+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:36.137 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.612280+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.612280+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.686672+0000 mgr.y (mgr.14505) 153 : cluster [DBG] pgmap v124: 492 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 414 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 2.7 KiB/s wr, 8 op/s 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.686672+0000 mgr.y (mgr.14505) 153 : cluster [DBG] pgmap v124: 492 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 414 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 2.7 KiB/s wr, 8 op/s 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.816842+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.816842+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.874102+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: cluster 2026-03-09T17:29:34.874102+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.903692+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.903692+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.996109+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:34.996109+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:35.397771+0000 mon.c (mon.2) 214 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:35 vm02 bash[23351]: audit 2026-03-09T17:29:35.397771+0000 mon.c (mon.2) 214 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.355890+0000 mgr.y (mgr.14505) 143 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.355890+0000 mgr.y (mgr.14505) 143 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.358009+0000 mgr.y (mgr.14505) 144 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.358009+0000 mgr.y (mgr.14505) 144 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.359532+0000 mgr.y (mgr.14505) 145 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.359532+0000 mgr.y (mgr.14505) 145 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.361250+0000 mgr.y (mgr.14505) 146 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.361250+0000 mgr.y (mgr.14505) 146 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.365329+0000 mgr.y (mgr.14505) 147 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.365329+0000 mgr.y (mgr.14505) 147 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.368540+0000 mgr.y (mgr.14505) 148 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.368540+0000 mgr.y (mgr.14505) 148 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.369496+0000 mgr.y (mgr.14505) 149 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.369496+0000 mgr.y (mgr.14505) 149 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.370868+0000 mgr.y (mgr.14505) 150 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.370868+0000 mgr.y (mgr.14505) 150 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.372892+0000 mgr.y (mgr.14505) 151 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.372892+0000 mgr.y (mgr.14505) 151 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.376641+0000 mgr.y (mgr.14505) 152 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.376641+0000 mgr.y (mgr.14505) 152 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.410397+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.355890+0000 mgr.y (mgr.14505) 143 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.355890+0000 mgr.y (mgr.14505) 143 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.358009+0000 mgr.y (mgr.14505) 144 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.358009+0000 mgr.y (mgr.14505) 144 : audit [DBG] from='mon.? 
-' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.359532+0000 mgr.y (mgr.14505) 145 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.359532+0000 mgr.y (mgr.14505) 145 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.361250+0000 mgr.y (mgr.14505) 146 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.361250+0000 mgr.y (mgr.14505) 146 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.365329+0000 mgr.y (mgr.14505) 147 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.365329+0000 mgr.y (mgr.14505) 147 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.368540+0000 mgr.y (mgr.14505) 148 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.368540+0000 mgr.y (mgr.14505) 148 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.369496+0000 mgr.y (mgr.14505) 149 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.369496+0000 mgr.y (mgr.14505) 149 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.370868+0000 mgr.y (mgr.14505) 150 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.370868+0000 mgr.y (mgr.14505) 150 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.372892+0000 mgr.y (mgr.14505) 151 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:36.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.372892+0000 mgr.y (mgr.14505) 151 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.376641+0000 mgr.y (mgr.14505) 152 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.376641+0000 mgr.y (mgr.14505) 152 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.410397+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.410397+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.427137+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.427137+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.461690+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.461690+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.506599+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.506599+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.541407+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.541407+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.590438+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.590438+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.611090+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.611090+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:36.290 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.612280+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.612280+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.686672+0000 mgr.y (mgr.14505) 153 : cluster [DBG] pgmap v124: 492 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 414 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 2.7 KiB/s wr, 8 op/s 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.686672+0000 mgr.y (mgr.14505) 153 : cluster [DBG] pgmap v124: 492 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 414 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 2.7 KiB/s wr, 8 op/s 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.816842+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.816842+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.874102+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: cluster 2026-03-09T17:29:34.874102+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.903692+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.903692+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.996109+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:34.996109+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:35.397771+0000 mon.c (mon.2) 214 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:35 vm00 bash[20770]: audit 2026-03-09T17:29:35.397771+0000 mon.c (mon.2) 214 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.410397+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.427137+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.427137+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.461690+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.461690+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.0 deep-scrub starts 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.506599+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.506599+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.541407+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.541407+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.0 deep-scrub ok 2026-03-09T17:29:36.292 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.590438+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.590438+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.611090+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.611090+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.612280+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.612280+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 
deep-scrub ok 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.686672+0000 mgr.y (mgr.14505) 153 : cluster [DBG] pgmap v124: 492 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 414 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 2.7 KiB/s wr, 8 op/s 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.686672+0000 mgr.y (mgr.14505) 153 : cluster [DBG] pgmap v124: 492 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 414 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 2.7 KiB/s wr, 8 op/s 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.816842+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.816842+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59916-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.874102+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: cluster 2026-03-09T17:29:34.874102+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.903692+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.903692+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.996109+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:34.996109+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:35.397771+0000 mon.c (mon.2) 214 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:35 vm00 bash[28333]: audit 2026-03-09T17:29:35.397771+0000 mon.c (mon.2) 214 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:29:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:29:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:29:37.141 INFO:tasks.workunit.client.0.vm00.stdout: io: Running main() from gmock_main.cc 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [==========] Running 14 tests from 1 test suite. 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] Global test environment set-up. 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] 14 tests from NeoRadosIo 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.Limits 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.Limits (2983 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.SimpleWrite 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.SimpleWrite (3236 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.ReadOp 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.ReadOp (3446 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.SparseRead 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.SparseRead (3345 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.RoundTrip 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.RoundTrip (2716 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.ReadIntoBuufferlist 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.ReadIntoBuufferlist (3039 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.OverlappingWriteRoundTrip 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.OverlappingWriteRoundTrip (3565 ms) 2026-03-09T17:29:37.142 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.WriteFullRoundTrip 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.WriteFullRoundTrip (3032 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.AppendRoundTrip 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.AppendRoundTrip (3005 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.Trunc 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.Trunc (3118 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.Remove 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.Remove (3494 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.XattrsRoundTrip 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ 
OK ] NeoRadosIo.XattrsRoundTrip (3186 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.RmXattr 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.RmXattr (2946 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.GetXattrs 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.GetXattrs (3290 ms) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] 14 tests from NeoRadosIo (44401 ms total) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] Global test environment tear-down 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [==========] 14 tests from 1 test suite ran. (44401 ms total) 2026-03-09T17:29:37.143 INFO:tasks.workunit.client.0.vm00.stdout: io: [ PASSED ] 14 tests. 2026-03-09T17:29:37.307 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: Running main() from gmock_main.cc 2026-03-09T17:29:37.307 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [==========] Running 14 tests from 1 test suite. 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] Global test environment set-up. 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.SetOpFlags 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.SetOpFlags (2820 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertExists 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertExists (3283 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertVersion 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertVersion (3495 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpXattr 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpXattr (3376 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Read 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Read (2638 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Checksum 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Checksum (3034 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.RWOrderedRead 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.RWOrderedRead (3564 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.ShortRead 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.ShortRead (3054 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] 
NeoRadosReadOps.Exec 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Exec (3003 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Stat 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Stat (3031 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Omap 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Omap (3602 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.OmapNuls 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.OmapNuls (3148 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.GetXattrs 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.GetXattrs (2940 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpExt 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpExt (3428 ms) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps (44416 ms total) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] Global test environment tear-down 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [==========] 14 tests from 1 test suite ran. (44416 ms total) 2026-03-09T17:29:37.308 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ PASSED ] 14 tests. 
2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.372894+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.372894+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.401137+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.401137+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.461187+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.461187+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.462236+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.462236+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.511021+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.511021+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.698026+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.698026+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.846032+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.846032+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.862024+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:35.862024+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.863990+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.863990+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.895350+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.895350+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.897866+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.897866+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.910820+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.910820+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.924096+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/2797594107' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.924096+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/2797594107' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.934384+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:35.934384+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:36.401041+0000 mon.c (mon.2) 217 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: audit 2026-03-09T17:29:36.401041+0000 mon.c (mon.2) 217 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:36.426137+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:37 vm00 bash[20770]: cluster 2026-03-09T17:29:36.426137+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.372894+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.372894+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.401137+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.401137+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.461187+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.461187+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.462236+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.462236+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.511021+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.511021+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:37.539 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.698026+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.698026+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.846032+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.846032+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.862024+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:35.862024+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.863990+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.863990+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.895350+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.895350+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.897866+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.897866+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 
192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.910820+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.910820+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.924096+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/2797594107' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.924096+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/2797594107' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.934384+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:35.934384+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:36.401041+0000 mon.c (mon.2) 217 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: audit 2026-03-09T17:29:36.401041+0000 mon.c (mon.2) 217 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:36.426137+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:37.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:37 vm00 bash[28333]: cluster 2026-03-09T17:29:36.426137+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.372894+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.372894+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.401137+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.401137+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.461187+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.461187+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.2 deep-scrub starts 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.462236+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.462236+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.2 deep-scrub ok 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.511021+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.511021+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.698026+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.698026+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.846032+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.846032+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.862024+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:35.862024+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.863990+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.863990+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.895350+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.895350+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.897866+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.897866+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.100:0/1537786428' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.910820+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.910820+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.924096+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 
192.168.123.100:0/2797594107' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.924096+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/2797594107' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.934384+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:35.934384+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:36.401041+0000 mon.c (mon.2) 217 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: audit 2026-03-09T17:29:36.401041+0000 mon.c (mon.2) 217 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:36.426137+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:37.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:37 vm02 bash[23351]: cluster 2026-03-09T17:29:36.426137+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.421545+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.421545+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.450280+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.450280+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.528643+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.528643+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 
vm00 bash[20770]: cluster 2026-03-09T17:29:36.610524+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.610524+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.687058+0000 mgr.y (mgr.14505) 154 : cluster [DBG] pgmap v127: 516 pgs: 5 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 128 unknown, 380 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:36.687058+0000 mgr.y (mgr.14505) 154 : cluster [DBG] pgmap v127: 516 pgs: 5 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 128 unknown, 380 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:36.989860+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:36.989860+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:36.989917+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:36.989917+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:36.989948+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:36.989948+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:37.125084+0000 mon.a (mon.0) 1244 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:37.125084+0000 mon.a (mon.0) 1244 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.382158+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/213312534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.382158+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/213312534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.391179+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.391179+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.412063+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.412063+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.677242+0000 mon.c (mon.2) 220 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:37.677242+0000 mon.c (mon.2) 220 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.010289+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.010289+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:38.032932+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: cluster 2026-03-09T17:29:38.032932+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.040318+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.040318+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.106043+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.106043+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.120828+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:38 vm00 bash[20770]: audit 2026-03-09T17:29:38.120828+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.421545+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.421545+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.450280+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.450280+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.528643+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.528643+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.610524+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.610524+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.687058+0000 mgr.y (mgr.14505) 154 : cluster [DBG] pgmap v127: 516 pgs: 5 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 128 unknown, 380 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:36.687058+0000 mgr.y (mgr.14505) 154 : cluster [DBG] pgmap v127: 516 pgs: 5 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 128 unknown, 380 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:36.989860+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:36.989860+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:36.989917+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:36.989917+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:36.989948+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:36.989948+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:37.125084+0000 mon.a (mon.0) 1244 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:37.125084+0000 mon.a (mon.0) 1244 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.382158+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/213312534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.382158+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/213312534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.391179+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.391179+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.412063+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.412063+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.677242+0000 mon.c (mon.2) 220 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:37.677242+0000 mon.c (mon.2) 220 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.010289+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.010289+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:38.032932+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: cluster 2026-03-09T17:29:38.032932+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.040318+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.040318+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.106043+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.106043+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.120828+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:38.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:38 vm00 bash[28333]: audit 2026-03-09T17:29:38.120828+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.421545+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.421545+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.450280+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.450280+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.8 deep-scrub starts 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.528643+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.528643+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.610524+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.610524+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.8 deep-scrub ok 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.687058+0000 mgr.y (mgr.14505) 154 : cluster [DBG] pgmap v127: 516 pgs: 5 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 128 unknown, 380 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:36.687058+0000 mgr.y (mgr.14505) 154 : cluster [DBG] pgmap v127: 516 pgs: 5 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 128 unknown, 380 active+clean; 144 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:36.989860+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:36.989860+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:36.989917+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:36.989917+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-60171-10"}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:36.989948+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:36.989948+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:37.125084+0000 mon.a (mon.0) 1244 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:37.125084+0000 mon.a (mon.0) 1244 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.382158+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/213312534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.382158+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/213312534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.391179+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.391179+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.412063+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.412063+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.677242+0000 mon.c (mon.2) 220 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:37.677242+0000 mon.c (mon.2) 220 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.010289+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.010289+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59916-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:38.032932+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: cluster 2026-03-09T17:29:38.032932+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.040318+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.040318+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.106043+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.106043+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.120828+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:38 vm02 bash[23351]: audit 2026-03-09T17:29:38.120828+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:38.455151+0000 mon.c (mon.2) 221 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:38.455151+0000 mon.c (mon.2) 221 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: cluster 2026-03-09T17:29:38.687523+0000 mgr.y (mgr.14505) 155 : cluster [DBG] pgmap v130: 452 pgs: 32 creating+peering, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 18 MiB/s wr, 0 op/s 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: cluster 2026-03-09T17:29:38.687523+0000 mgr.y (mgr.14505) 155 : cluster [DBG] pgmap v130: 452 pgs: 32 creating+peering, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 18 MiB/s wr, 0 op/s 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.017993+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.017993+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: cluster 2026-03-09T17:29:39.021351+0000 mon.a (mon.0) 1252 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: cluster 2026-03-09T17:29:39.021351+0000 mon.a (mon.0) 1252 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.021853+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.021853+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.069713+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1947901145' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.069713+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1947901145' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.070249+0000 mon.a (mon.0) 1254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:39 vm00 bash[20770]: audit 2026-03-09T17:29:39.070249+0000 mon.a (mon.0) 1254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:38.455151+0000 mon.c (mon.2) 221 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:38.455151+0000 mon.c (mon.2) 221 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: cluster 2026-03-09T17:29:38.687523+0000 mgr.y (mgr.14505) 155 : cluster [DBG] pgmap v130: 452 pgs: 32 creating+peering, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 18 MiB/s wr, 0 op/s 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: cluster 2026-03-09T17:29:38.687523+0000 mgr.y (mgr.14505) 155 : cluster [DBG] pgmap v130: 452 pgs: 32 creating+peering, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 18 MiB/s wr, 0 op/s 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.017993+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.017993+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: cluster 2026-03-09T17:29:39.021351+0000 mon.a (mon.0) 1252 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: cluster 2026-03-09T17:29:39.021351+0000 mon.a (mon.0) 1252 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.021853+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.021853+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.069713+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1947901145' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.069713+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 
192.168.123.100:0/1947901145' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.070249+0000 mon.a (mon.0) 1254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:39 vm00 bash[28333]: audit 2026-03-09T17:29:39.070249+0000 mon.a (mon.0) 1254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:38.455151+0000 mon.c (mon.2) 221 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:38.455151+0000 mon.c (mon.2) 221 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: cluster 2026-03-09T17:29:38.687523+0000 mgr.y (mgr.14505) 155 : cluster [DBG] pgmap v130: 452 pgs: 32 creating+peering, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 18 MiB/s wr, 0 op/s 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: cluster 2026-03-09T17:29:38.687523+0000 mgr.y (mgr.14505) 155 : cluster [DBG] pgmap v130: 452 pgs: 32 creating+peering, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 18 MiB/s wr, 0 op/s 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.017993+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.017993+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: cluster 2026-03-09T17:29:39.021351+0000 mon.a (mon.0) 1252 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: cluster 2026-03-09T17:29:39.021351+0000 mon.a (mon.0) 1252 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.021853+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.021853+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.069713+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1947901145' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.069713+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1947901145' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.070249+0000 mon.a (mon.0) 1254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:39 vm02 bash[23351]: audit 2026-03-09T17:29:39.070249+0000 mon.a (mon.0) 1254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:39.456226+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:39.456226+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:40.023523+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:40.023523+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: cluster 2026-03-09T17:29:40.085063+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: cluster 2026-03-09T17:29:40.085063+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:40.097369+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/1478824345' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:40.097369+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/1478824345' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:40.111967+0000 mon.a (mon.0) 1257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:40 vm00 bash[20770]: audit 2026-03-09T17:29:40.111967+0000 mon.a (mon.0) 1257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:39.456226+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:39.456226+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:40.023523+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:40.023523+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: cluster 2026-03-09T17:29:40.085063+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: cluster 2026-03-09T17:29:40.085063+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:40.097369+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/1478824345' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:40.097369+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/1478824345' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:40.111967+0000 mon.a (mon.0) 1257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:40 vm00 bash[28333]: audit 2026-03-09T17:29:40.111967+0000 mon.a (mon.0) 1257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:39.456226+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:39.456226+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:40.023523+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:40.023523+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59908-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: cluster 2026-03-09T17:29:40.085063+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: cluster 2026-03-09T17:29:40.085063+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:40.097369+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/1478824345' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:40.097369+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/1478824345' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:40.111967+0000 mon.a (mon.0) 1257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:40 vm02 bash[23351]: audit 2026-03-09T17:29:40.111967+0000 mon.a (mon.0) 1257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:40.096589+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:40.096589+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:40.135402+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:40.135402+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: audit 2026-03-09T17:29:40.464230+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: audit 2026-03-09T17:29:40.464230+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:40.687938+0000 mgr.y (mgr.14505) 156 : cluster [DBG] pgmap v133: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 4 op/s 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:40.687938+0000 mgr.y (mgr.14505) 156 : cluster [DBG] pgmap v133: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 4 op/s 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: audit 2026-03-09T17:29:41.028333+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: audit 2026-03-09T17:29:41.028333+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: audit 2026-03-09T17:29:41.028365+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: audit 2026-03-09T17:29:41.028365+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:41.067990+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T17:29:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:41 vm02 bash[23351]: cluster 2026-03-09T17:29:41.067990+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T17:29:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:29:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:40.096589+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:40.096589+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:40.135402+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:40.135402+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: audit 2026-03-09T17:29:40.464230+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: audit 2026-03-09T17:29:40.464230+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:40.687938+0000 mgr.y (mgr.14505) 156 : cluster [DBG] pgmap v133: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 4 op/s 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:40.687938+0000 mgr.y (mgr.14505) 156 : cluster [DBG] pgmap v133: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 4 op/s 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: audit 2026-03-09T17:29:41.028333+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: audit 2026-03-09T17:29:41.028333+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: audit 2026-03-09T17:29:41.028365+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: audit 2026-03-09T17:29:41.028365+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:41.067990+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:41 vm00 bash[20770]: cluster 2026-03-09T17:29:41.067990+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:40.096589+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:40.096589+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:40.135402+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:40.135402+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: audit 2026-03-09T17:29:40.464230+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: audit 2026-03-09T17:29:40.464230+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:40.687938+0000 mgr.y (mgr.14505) 156 : cluster [DBG] pgmap v133: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 4 op/s 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:40.687938+0000 mgr.y (mgr.14505) 156 : cluster [DBG] pgmap v133: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 4 op/s 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: audit 2026-03-09T17:29:41.028333+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: audit 2026-03-09T17:29:41.028333+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-60171-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: audit 2026-03-09T17:29:41.028365+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: audit 2026-03-09T17:29:41.028365+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59916-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:41.067990+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T17:29:42.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:41 vm00 bash[28333]: cluster 2026-03-09T17:29:41.067990+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: cluster 2026-03-09T17:29:41.448129+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: cluster 2026-03-09T17:29:41.448129+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:41.465412+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:41.465412+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:41.566909+0000 mgr.y (mgr.14505) 157 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:41.566909+0000 mgr.y (mgr.14505) 157 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: cluster 2026-03-09T17:29:41.886443+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: cluster 2026-03-09T17:29:41.886443+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:42.108800+0000 mon.c (mon.2) 227 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:42.108800+0000 mon.c (mon.2) 227 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:42.466248+0000 mon.c (mon.2) 228 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:42 vm02 bash[23351]: audit 2026-03-09T17:29:42.466248+0000 mon.c (mon.2) 228 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: cluster 2026-03-09T17:29:41.448129+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: cluster 2026-03-09T17:29:41.448129+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:41.465412+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:41.465412+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:41.566909+0000 mgr.y (mgr.14505) 157 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:41.566909+0000 mgr.y (mgr.14505) 157 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: cluster 2026-03-09T17:29:41.886443+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: cluster 2026-03-09T17:29:41.886443+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:42.108800+0000 mon.c (mon.2) 227 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:42.108800+0000 mon.c (mon.2) 227 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:42.466248+0000 mon.c (mon.2) 228 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:42 vm00 bash[20770]: audit 2026-03-09T17:29:42.466248+0000 mon.c (mon.2) 228 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: cluster 2026-03-09T17:29:41.448129+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: cluster 2026-03-09T17:29:41.448129+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:41.465412+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:41.465412+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:41.566909+0000 mgr.y (mgr.14505) 157 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:41.566909+0000 mgr.y (mgr.14505) 157 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: cluster 2026-03-09T17:29:41.886443+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: cluster 2026-03-09T17:29:41.886443+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:42.108800+0000 mon.c (mon.2) 227 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:42.108800+0000 mon.c (mon.2) 227 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:42.466248+0000 mon.c (mon.2) 228 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:42 vm00 bash[28333]: audit 2026-03-09T17:29:42.466248+0000 mon.c (mon.2) 228 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: cluster 2026-03-09T17:29:42.688360+0000 mgr.y (mgr.14505) 158 : cluster [DBG] pgmap v136: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: cluster 2026-03-09T17:29:42.688360+0000 mgr.y (mgr.14505) 158 : cluster [DBG] pgmap v136: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: cluster 2026-03-09T17:29:42.798031+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: cluster 2026-03-09T17:29:42.798031+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: audit 2026-03-09T17:29:42.845812+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 
192.168.123.100:0/626659542' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: audit 2026-03-09T17:29:42.845812+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: audit 2026-03-09T17:29:43.466885+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: audit 2026-03-09T17:29:43.466885+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: audit 2026-03-09T17:29:43.855278+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: audit 2026-03-09T17:29:43.855278+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: cluster 2026-03-09T17:29:43.867694+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T17:29:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:44 vm02 bash[23351]: cluster 2026-03-09T17:29:43.867694+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T17:29:44.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: cluster 2026-03-09T17:29:42.688360+0000 mgr.y (mgr.14505) 158 : cluster [DBG] pgmap v136: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: cluster 2026-03-09T17:29:42.688360+0000 mgr.y (mgr.14505) 158 : cluster [DBG] pgmap v136: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: cluster 2026-03-09T17:29:42.798031+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: cluster 2026-03-09T17:29:42.798031+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: audit 2026-03-09T17:29:42.845812+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 
192.168.123.100:0/626659542' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: audit 2026-03-09T17:29:42.845812+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: audit 2026-03-09T17:29:43.466885+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: audit 2026-03-09T17:29:43.466885+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: audit 2026-03-09T17:29:43.855278+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: audit 2026-03-09T17:29:43.855278+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: cluster 2026-03-09T17:29:43.867694+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:44 vm00 bash[20770]: cluster 2026-03-09T17:29:43.867694+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: cluster 2026-03-09T17:29:42.688360+0000 mgr.y (mgr.14505) 158 : cluster [DBG] pgmap v136: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: cluster 2026-03-09T17:29:42.688360+0000 mgr.y (mgr.14505) 158 : cluster [DBG] pgmap v136: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: cluster 2026-03-09T17:29:42.798031+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: cluster 2026-03-09T17:29:42.798031+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: audit 2026-03-09T17:29:42.845812+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 
192.168.123.100:0/626659542' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: audit 2026-03-09T17:29:42.845812+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: audit 2026-03-09T17:29:43.466885+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: audit 2026-03-09T17:29:43.466885+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: audit 2026-03-09T17:29:43.855278+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: audit 2026-03-09T17:29:43.855278+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.100:0/626659542' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59908-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: cluster 2026-03-09T17:29:43.867694+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T17:29:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:44 vm00 bash[28333]: cluster 2026-03-09T17:29:43.867694+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:43.908499+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:43.908499+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.467600+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.467600+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.537445+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.537445+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.538556+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.538556+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.880475+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.880475+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.880506+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.880506+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.893922+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.893922+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.918988+0000 mon.c (mon.2) 231 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.918988+0000 mon.c (mon.2) 231 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: cluster 2026-03-09T17:29:44.920578+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: cluster 2026-03-09T17:29:44.920578+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.933797+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.933797+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.942419+0000 mon.c (mon.2) 232 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:44.942419+0000 mon.c (mon.2) 232 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:45.040109+0000 mon.a (mon.0) 1273 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:45 vm02 bash[23351]: audit 2026-03-09T17:29:45.040109+0000 mon.a (mon.0) 1273 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:43.908499+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 
192.168.123.100:0/2483822054' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:43.908499+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.467600+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.467600+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.537445+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.537445+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.538556+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.538556+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.880475+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.880475+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 
192.168.123.100:0/2483822054' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.880506+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.880506+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.893922+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.893922+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.918988+0000 mon.c (mon.2) 231 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.918988+0000 mon.c (mon.2) 231 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: cluster 2026-03-09T17:29:44.920578+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: cluster 2026-03-09T17:29:44.920578+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.933797+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.933797+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.942419+0000 mon.c (mon.2) 232 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:44.942419+0000 mon.c (mon.2) 232 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:45.040109+0000 mon.a (mon.0) 1273 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:45 vm00 bash[28333]: audit 2026-03-09T17:29:45.040109+0000 mon.a (mon.0) 1273 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:43.908499+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:43.908499+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.467600+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.467600+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.537445+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.537445+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.538556+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.538556+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.880475+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.880475+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2483822054' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59916-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.880506+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.880506+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.893922+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.893922+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.918988+0000 mon.c (mon.2) 231 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.918988+0000 mon.c (mon.2) 231 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: cluster 2026-03-09T17:29:44.920578+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T17:29:45.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: cluster 2026-03-09T17:29:44.920578+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T17:29:45.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.933797+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.933797+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:29:45.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.942419+0000 mon.c (mon.2) 232 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:29:45.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:44.942419+0000 mon.c (mon.2) 232 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:29:45.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:45.040109+0000 mon.a (mon.0) 1273 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:45.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:45 vm00 bash[20770]: audit 2026-03-09T17:29:45.040109+0000 mon.a (mon.0) 1273 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: cluster 2026-03-09T17:29:44.688740+0000 mgr.y (mgr.14505) 159 : cluster [DBG] pgmap v139: 492 pgs: 6 creating+peering, 58 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: cluster 2026-03-09T17:29:44.688740+0000 mgr.y (mgr.14505) 159 : cluster [DBG] pgmap v139: 492 pgs: 6 creating+peering, 58 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:46.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.468456+0000 mon.c (mon.2) 233 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.468456+0000 mon.c (mon.2) 233 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.885855+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.885855+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: cluster 2026-03-09T17:29:45.899060+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: cluster 2026-03-09T17:29:45.899060+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.904955+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.904955+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.911705+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.911705+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.951153+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/1233910470' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.951153+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 
192.168.123.100:0/1233910470' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.951944+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:46 vm02 bash[23351]: audit 2026-03-09T17:29:45.951944+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: cluster 2026-03-09T17:29:44.688740+0000 mgr.y (mgr.14505) 159 : cluster [DBG] pgmap v139: 492 pgs: 6 creating+peering, 58 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: cluster 2026-03-09T17:29:44.688740+0000 mgr.y (mgr.14505) 159 : cluster [DBG] pgmap v139: 492 pgs: 6 creating+peering, 58 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.468456+0000 mon.c (mon.2) 233 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.468456+0000 mon.c (mon.2) 233 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.885855+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.885855+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: cluster 2026-03-09T17:29:45.899060+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: cluster 2026-03-09T17:29:45.899060+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.904955+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.904955+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.911705+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.911705+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.951153+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/1233910470' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.951153+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/1233910470' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.951944+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:46 vm00 bash[20770]: audit 2026-03-09T17:29:45.951944+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:29:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:29:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: cluster 2026-03-09T17:29:44.688740+0000 mgr.y (mgr.14505) 159 : cluster [DBG] pgmap v139: 492 pgs: 6 creating+peering, 58 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: cluster 2026-03-09T17:29:44.688740+0000 mgr.y (mgr.14505) 159 : cluster [DBG] pgmap v139: 492 pgs: 6 creating+peering, 58 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.468456+0000 mon.c (mon.2) 233 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.468456+0000 mon.c (mon.2) 233 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.885855+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.885855+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: cluster 2026-03-09T17:29:45.899060+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: cluster 2026-03-09T17:29:45.899060+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.904955+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.904955+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.911705+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.911705+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.951153+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/1233910470' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.951153+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/1233910470' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.951944+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:46 vm00 bash[28333]: audit 2026-03-09T17:29:45.951944+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.472131+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.472131+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: cluster 2026-03-09T17:29:46.565794+0000 mon.a (mon.0) 1278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: cluster 2026-03-09T17:29:46.565794+0000 mon.a (mon.0) 1278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: cluster 2026-03-09T17:29:46.886186+0000 mon.a (mon.0) 1279 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: cluster 2026-03-09T17:29:46.886186+0000 mon.a (mon.0) 1279 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.933102+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.933102+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.933207+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.933207+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: cluster 2026-03-09T17:29:46.937932+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: cluster 2026-03-09T17:29:46.937932+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.992455+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:47 vm00 bash[20770]: audit 2026-03-09T17:29:46.992455+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 
192.168.123.100:0/1477347227' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.472131+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.472131+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: cluster 2026-03-09T17:29:46.565794+0000 mon.a (mon.0) 1278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: cluster 2026-03-09T17:29:46.565794+0000 mon.a (mon.0) 1278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: cluster 2026-03-09T17:29:46.886186+0000 mon.a (mon.0) 1279 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: cluster 2026-03-09T17:29:46.886186+0000 mon.a (mon.0) 1279 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.933102+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.933102+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.933207+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.933207+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: cluster 2026-03-09T17:29:46.937932+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: cluster 2026-03-09T17:29:46.937932+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.992455+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:47 vm00 bash[28333]: audit 2026-03-09T17:29:46.992455+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.472131+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.472131+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: cluster 2026-03-09T17:29:46.565794+0000 mon.a (mon.0) 1278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: cluster 2026-03-09T17:29:46.565794+0000 mon.a (mon.0) 1278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: cluster 2026-03-09T17:29:46.886186+0000 mon.a (mon.0) 1279 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: cluster 2026-03-09T17:29:46.886186+0000 mon.a (mon.0) 1279 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.933102+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]': finished 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.933102+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-13", "mode": "writeback"}]': finished 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.933207+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.933207+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59908-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: cluster 2026-03-09T17:29:46.937932+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: cluster 2026-03-09T17:29:46.937932+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.992455+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:47 vm02 bash[23351]: audit 2026-03-09T17:29:46.992455+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: cluster 2026-03-09T17:29:46.689138+0000 mgr.y (mgr.14505) 160 : cluster [DBG] pgmap v142: 460 pgs: 32 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: cluster 2026-03-09T17:29:46.689138+0000 mgr.y (mgr.14505) 160 : cluster [DBG] pgmap v142: 460 pgs: 32 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: audit 2026-03-09T17:29:47.472917+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: audit 2026-03-09T17:29:47.472917+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: audit 2026-03-09T17:29:47.936866+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 
192.168.123.100:0/1477347227' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: audit 2026-03-09T17:29:47.936866+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: cluster 2026-03-09T17:29:47.947673+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:48 vm00 bash[20770]: cluster 2026-03-09T17:29:47.947673+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: cluster 2026-03-09T17:29:46.689138+0000 mgr.y (mgr.14505) 160 : cluster [DBG] pgmap v142: 460 pgs: 32 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: cluster 2026-03-09T17:29:46.689138+0000 mgr.y (mgr.14505) 160 : cluster [DBG] pgmap v142: 460 pgs: 32 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: audit 2026-03-09T17:29:47.472917+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: audit 2026-03-09T17:29:47.472917+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: audit 2026-03-09T17:29:47.936866+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: audit 2026-03-09T17:29:47.936866+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 
192.168.123.100:0/1477347227' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: cluster 2026-03-09T17:29:47.947673+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T17:29:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:48 vm00 bash[28333]: cluster 2026-03-09T17:29:47.947673+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: cluster 2026-03-09T17:29:46.689138+0000 mgr.y (mgr.14505) 160 : cluster [DBG] pgmap v142: 460 pgs: 32 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: cluster 2026-03-09T17:29:46.689138+0000 mgr.y (mgr.14505) 160 : cluster [DBG] pgmap v142: 460 pgs: 32 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 88 op/s 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: audit 2026-03-09T17:29:47.472917+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: audit 2026-03-09T17:29:47.472917+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: audit 2026-03-09T17:29:47.936866+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: audit 2026-03-09T17:29:47.936866+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.100:0/1477347227' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59916-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: cluster 2026-03-09T17:29:47.947673+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T17:29:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:48 vm02 bash[23351]: cluster 2026-03-09T17:29:47.947673+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T17:29:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:48.473923+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:48.473923+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: cluster 2026-03-09T17:29:49.006966+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: cluster 2026-03-09T17:29:49.006966+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.064679+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2308530249' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.064679+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2308530249' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.066612+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.066612+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.204828+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.204828+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.205838+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.205838+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.206597+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.206597+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.207252+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.207252+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.208976+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.208976+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.210624+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.210624+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.211536+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.211536+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.212316+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.212316+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.213362+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.213362+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:49.656 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.214017+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:49.657 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:49 vm02 bash[23351]: audit 2026-03-09T17:29:49.214017+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:49.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:48.473923+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:48.473923+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: cluster 2026-03-09T17:29:49.006966+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: cluster 2026-03-09T17:29:49.006966+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.064679+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2308530249' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.064679+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2308530249' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.066612+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.066612+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.204828+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.204828+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.205838+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.205838+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.206597+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.206597+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.207252+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.207252+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.208976+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.208976+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.210624+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.210624+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.211536+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.211536+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.212316+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.212316+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.213362+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.213362+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.214017+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:49 vm00 bash[20770]: audit 2026-03-09T17:29:49.214017+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:48.473923+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:48.473923+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: cluster 2026-03-09T17:29:49.006966+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: cluster 2026-03-09T17:29:49.006966+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.064679+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2308530249' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.064679+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2308530249' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.066612+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.066612+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.204828+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.204828+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.205838+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.205838+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.206597+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.206597+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.207252+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.207252+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.208976+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.208976+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.210624+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.210624+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.211536+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.211536+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.212316+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.212316+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.213362+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.213362+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.214017+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:49.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:49 vm00 bash[28333]: audit 2026-03-09T17:29:49.214017+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:48.689534+0000 mgr.y (mgr.14505) 161 : cluster [DBG] pgmap v145: 460 pgs: 32 creating+peering, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 75 op/s 2026-03-09T17:29:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:48.689534+0000 mgr.y (mgr.14505) 161 : cluster [DBG] pgmap v145: 460 pgs: 32 creating+peering, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 75 op/s 2026-03-09T17:29:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.205946+0000 mgr.y (mgr.14505) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.205946+0000 mgr.y (mgr.14505) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.206744+0000 mgr.y (mgr.14505) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.206744+0000 mgr.y (mgr.14505) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.207476+0000 mgr.y (mgr.14505) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.207476+0000 mgr.y (mgr.14505) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.208317+0000 mgr.y (mgr.14505) 165 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.208317+0000 mgr.y (mgr.14505) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.209879+0000 mgr.y (mgr.14505) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.209879+0000 mgr.y (mgr.14505) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.211578+0000 mgr.y (mgr.14505) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.211578+0000 mgr.y (mgr.14505) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.212536+0000 mgr.y (mgr.14505) 168 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.212536+0000 mgr.y (mgr.14505) 168 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.213235+0000 mgr.y (mgr.14505) 169 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.213235+0000 mgr.y (mgr.14505) 169 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.214307+0000 mgr.y (mgr.14505) 170 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.214307+0000 mgr.y (mgr.14505) 170 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.214964+0000 mgr.y (mgr.14505) 171 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.214964+0000 mgr.y (mgr.14505) 171 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.320817+0000 osd.2 (osd.2) 15 : cluster [DBG] 159.7 scrub starts 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.320817+0000 osd.2 (osd.2) 15 : cluster [DBG] 159.7 scrub starts 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.322725+0000 osd.2 (osd.2) 16 : cluster [DBG] 159.7 scrub ok 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.322725+0000 osd.2 (osd.2) 16 : cluster [DBG] 159.7 scrub ok 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.601392+0000 osd.1 (osd.1) 11 : cluster [DBG] 159.2 deep-scrub starts 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.601392+0000 osd.1 (osd.1) 11 : cluster [DBG] 159.2 deep-scrub starts 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.707767+0000 osd.7 (osd.7) 3 : cluster [DBG] 159.3 scrub starts 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.707767+0000 osd.7 (osd.7) 3 : cluster [DBG] 159.3 scrub starts 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.759559+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:49.759559+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.769801+0000 osd.7 (osd.7) 4 : cluster [DBG] 159.3 scrub ok 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.769801+0000 osd.7 (osd.7) 4 : cluster [DBG] 159.3 scrub ok 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.813653+0000 osd.1 (osd.1) 12 : cluster [DBG] 159.2 deep-scrub ok 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:49.813653+0000 osd.1 (osd.1) 12 : cluster [DBG] 159.2 deep-scrub ok 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:50.013686+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:50.013686+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:50.016744+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: cluster 2026-03-09T17:29:50.016744+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:50.070073+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:50.070073+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:50.083288+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:50 vm02 bash[23351]: audit 2026-03-09T17:29:50.083288+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:48.689534+0000 mgr.y (mgr.14505) 161 : cluster [DBG] pgmap v145: 460 pgs: 32 creating+peering, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 75 op/s 2026-03-09T17:29:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:48.689534+0000 mgr.y (mgr.14505) 161 : cluster [DBG] pgmap v145: 460 pgs: 32 creating+peering, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 75 op/s 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.205946+0000 mgr.y (mgr.14505) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.205946+0000 mgr.y (mgr.14505) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.206744+0000 mgr.y (mgr.14505) 163 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.206744+0000 mgr.y (mgr.14505) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.207476+0000 mgr.y (mgr.14505) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.207476+0000 mgr.y (mgr.14505) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.208317+0000 mgr.y (mgr.14505) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.208317+0000 mgr.y (mgr.14505) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.209879+0000 mgr.y (mgr.14505) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.209879+0000 mgr.y (mgr.14505) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.211578+0000 mgr.y (mgr.14505) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.211578+0000 mgr.y (mgr.14505) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.212536+0000 mgr.y (mgr.14505) 168 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.212536+0000 mgr.y (mgr.14505) 168 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.213235+0000 mgr.y (mgr.14505) 169 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.213235+0000 mgr.y (mgr.14505) 169 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.214307+0000 mgr.y (mgr.14505) 170 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.214307+0000 mgr.y (mgr.14505) 170 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.214964+0000 mgr.y (mgr.14505) 171 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.214964+0000 mgr.y (mgr.14505) 171 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.320817+0000 osd.2 (osd.2) 15 : cluster [DBG] 159.7 scrub starts 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.320817+0000 osd.2 (osd.2) 15 : cluster [DBG] 159.7 scrub starts 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.322725+0000 osd.2 (osd.2) 16 : cluster [DBG] 159.7 scrub ok 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.322725+0000 osd.2 (osd.2) 16 : cluster [DBG] 159.7 scrub ok 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.601392+0000 osd.1 (osd.1) 11 : cluster [DBG] 159.2 deep-scrub starts 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.601392+0000 osd.1 (osd.1) 11 : cluster [DBG] 159.2 deep-scrub starts 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.707767+0000 osd.7 (osd.7) 3 : cluster [DBG] 159.3 scrub starts 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.707767+0000 osd.7 (osd.7) 3 : cluster [DBG] 159.3 scrub starts 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.759559+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:49.759559+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.769801+0000 osd.7 (osd.7) 4 : cluster [DBG] 159.3 scrub ok 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.769801+0000 osd.7 (osd.7) 4 : cluster [DBG] 159.3 scrub ok 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.813653+0000 osd.1 (osd.1) 12 : cluster [DBG] 159.2 deep-scrub ok 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:49.813653+0000 osd.1 (osd.1) 12 : cluster [DBG] 159.2 deep-scrub ok 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:50.013686+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:50.013686+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:50.016744+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: cluster 2026-03-09T17:29:50.016744+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:50.070073+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:50.070073+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:50.083288+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:50 vm00 bash[28333]: audit 2026-03-09T17:29:50.083288+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 
192.168.123.100:0/728919398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:48.689534+0000 mgr.y (mgr.14505) 161 : cluster [DBG] pgmap v145: 460 pgs: 32 creating+peering, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 75 op/s 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:48.689534+0000 mgr.y (mgr.14505) 161 : cluster [DBG] pgmap v145: 460 pgs: 32 creating+peering, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 75 op/s 2026-03-09T17:29:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.205946+0000 mgr.y (mgr.14505) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.205946+0000 mgr.y (mgr.14505) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.0"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.206744+0000 mgr.y (mgr.14505) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.206744+0000 mgr.y (mgr.14505) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.1"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.207476+0000 mgr.y (mgr.14505) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.207476+0000 mgr.y (mgr.14505) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.2"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.208317+0000 mgr.y (mgr.14505) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.208317+0000 mgr.y (mgr.14505) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.3"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.209879+0000 mgr.y (mgr.14505) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.209879+0000 mgr.y (mgr.14505) 166 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "159.4"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.211578+0000 mgr.y (mgr.14505) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.211578+0000 mgr.y (mgr.14505) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.5"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.212536+0000 mgr.y (mgr.14505) 168 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.212536+0000 mgr.y (mgr.14505) 168 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.6"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.213235+0000 mgr.y (mgr.14505) 169 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.213235+0000 mgr.y (mgr.14505) 169 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.7"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.214307+0000 mgr.y (mgr.14505) 170 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.214307+0000 mgr.y (mgr.14505) 170 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.8"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.214964+0000 mgr.y (mgr.14505) 171 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.214964+0000 mgr.y (mgr.14505) 171 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "159.9"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.320817+0000 osd.2 (osd.2) 15 : cluster [DBG] 159.7 scrub starts 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.320817+0000 osd.2 (osd.2) 15 : cluster [DBG] 159.7 scrub starts 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.322725+0000 osd.2 (osd.2) 16 : cluster [DBG] 159.7 scrub ok 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.322725+0000 osd.2 (osd.2) 16 : cluster [DBG] 159.7 scrub ok 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.601392+0000 osd.1 (osd.1) 11 : cluster [DBG] 159.2 deep-scrub starts 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.601392+0000 osd.1 (osd.1) 11 : cluster [DBG] 159.2 deep-scrub starts 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.707767+0000 osd.7 (osd.7) 3 : cluster [DBG] 159.3 scrub starts 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.707767+0000 osd.7 (osd.7) 3 : cluster [DBG] 159.3 scrub starts 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.759559+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:49.759559+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.769801+0000 osd.7 (osd.7) 4 : cluster [DBG] 159.3 scrub ok 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.769801+0000 osd.7 (osd.7) 4 : cluster [DBG] 159.3 scrub ok 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.813653+0000 osd.1 (osd.1) 12 : cluster [DBG] 159.2 deep-scrub ok 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:49.813653+0000 osd.1 (osd.1) 12 : cluster [DBG] 159.2 deep-scrub ok 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:50.013686+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:50.013686+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59908-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:50.016744+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: cluster 2026-03-09T17:29:50.016744+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:50.070073+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:50.070073+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:50.083288+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:50 vm00 bash[20770]: audit 2026-03-09T17:29:50.083288+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 
192.168.123.100:0/728919398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:51.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:29:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.448421+0000 osd.6 (osd.6) 15 : cluster [DBG] 159.4 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.448421+0000 osd.6 (osd.6) 15 : cluster [DBG] 159.4 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.449706+0000 osd.6 (osd.6) 16 : cluster [DBG] 159.4 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.449706+0000 osd.6 (osd.6) 16 : cluster [DBG] 159.4 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.925512+0000 osd.3 (osd.3) 3 : cluster [DBG] 159.5 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.925512+0000 osd.3 (osd.3) 3 : cluster [DBG] 159.5 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.926913+0000 osd.3 (osd.3) 4 : cluster [DBG] 159.5 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:49.926913+0000 osd.3 (osd.3) 4 : cluster [DBG] 159.5 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.143373+0000 osd.0 (osd.0) 7 : cluster [DBG] 159.8 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.143373+0000 osd.0 (osd.0) 7 : cluster [DBG] 159.8 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.144736+0000 osd.0 (osd.0) 8 : cluster [DBG] 159.8 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.144736+0000 osd.0 (osd.0) 8 : cluster [DBG] 159.8 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.301636+0000 osd.2 (osd.2) 17 : cluster [DBG] 159.6 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.301636+0000 osd.2 (osd.2) 17 : cluster [DBG] 159.6 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.303746+0000 osd.2 (osd.2) 18 : cluster [DBG] 159.6 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.303746+0000 osd.2 (osd.2) 18 : cluster [DBG] 159.6 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.600969+0000 osd.1 (osd.1) 13 : cluster [DBG] 159.0 scrub starts 2026-03-09T17:29:51.886 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.600969+0000 osd.1 (osd.1) 13 : cluster [DBG] 159.0 scrub starts 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.602370+0000 osd.1 (osd.1) 14 : cluster [DBG] 159.0 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:50.602370+0000 osd.1 (osd.1) 14 : cluster [DBG] 159.0 scrub ok 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:50.773660+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:50.773660+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.059712+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.059712+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.059802+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.059802+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:51.133081+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: cluster 2026-03-09T17:29:51.133081+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.144590+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.144590+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.174635+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:51 vm02 bash[23351]: audit 2026-03-09T17:29:51.174635+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.448421+0000 osd.6 (osd.6) 15 : cluster [DBG] 159.4 scrub starts 2026-03-09T17:29:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.448421+0000 osd.6 (osd.6) 15 : cluster [DBG] 159.4 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.449706+0000 osd.6 (osd.6) 16 : cluster [DBG] 159.4 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.449706+0000 osd.6 (osd.6) 16 : cluster [DBG] 159.4 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.925512+0000 osd.3 (osd.3) 3 : cluster [DBG] 159.5 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.925512+0000 osd.3 (osd.3) 3 : cluster [DBG] 159.5 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.926913+0000 osd.3 (osd.3) 4 : cluster [DBG] 159.5 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:49.926913+0000 osd.3 (osd.3) 4 : cluster [DBG] 159.5 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.143373+0000 osd.0 (osd.0) 7 : cluster [DBG] 159.8 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.143373+0000 osd.0 (osd.0) 7 : cluster [DBG] 159.8 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.144736+0000 osd.0 (osd.0) 8 : cluster [DBG] 159.8 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.144736+0000 osd.0 (osd.0) 8 : cluster [DBG] 159.8 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.301636+0000 osd.2 (osd.2) 17 : cluster [DBG] 159.6 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.301636+0000 osd.2 (osd.2) 17 : cluster [DBG] 159.6 scrub starts 2026-03-09T17:29:52.039 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.303746+0000 osd.2 (osd.2) 18 : cluster [DBG] 159.6 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.303746+0000 osd.2 (osd.2) 18 : cluster [DBG] 159.6 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.600969+0000 osd.1 (osd.1) 13 : cluster [DBG] 159.0 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.600969+0000 osd.1 (osd.1) 13 : cluster [DBG] 159.0 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.602370+0000 osd.1 (osd.1) 14 : cluster [DBG] 159.0 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:50.602370+0000 osd.1 (osd.1) 14 : cluster [DBG] 159.0 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:50.773660+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:50.773660+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.059712+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.059712+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.059802+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.059802+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 
192.168.123.100:0/728919398' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:51.133081+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: cluster 2026-03-09T17:29:51.133081+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.144590+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.144590+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.174635+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:51 vm00 bash[20770]: audit 2026-03-09T17:29:51.174635+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 
192.168.123.100:0/1905052089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.448421+0000 osd.6 (osd.6) 15 : cluster [DBG] 159.4 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.448421+0000 osd.6 (osd.6) 15 : cluster [DBG] 159.4 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.449706+0000 osd.6 (osd.6) 16 : cluster [DBG] 159.4 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.449706+0000 osd.6 (osd.6) 16 : cluster [DBG] 159.4 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.925512+0000 osd.3 (osd.3) 3 : cluster [DBG] 159.5 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.925512+0000 osd.3 (osd.3) 3 : cluster [DBG] 159.5 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.926913+0000 osd.3 (osd.3) 4 : cluster [DBG] 159.5 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:49.926913+0000 osd.3 (osd.3) 4 : cluster [DBG] 159.5 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.143373+0000 osd.0 (osd.0) 7 : cluster [DBG] 159.8 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.143373+0000 osd.0 (osd.0) 7 : cluster [DBG] 159.8 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.144736+0000 osd.0 (osd.0) 8 : cluster [DBG] 159.8 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.144736+0000 osd.0 (osd.0) 8 : cluster [DBG] 159.8 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.301636+0000 osd.2 (osd.2) 17 : cluster [DBG] 159.6 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.301636+0000 osd.2 (osd.2) 17 : cluster [DBG] 159.6 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.303746+0000 osd.2 (osd.2) 18 : cluster [DBG] 159.6 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.303746+0000 osd.2 (osd.2) 18 : cluster [DBG] 159.6 scrub ok 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.600969+0000 osd.1 (osd.1) 13 : cluster [DBG] 159.0 scrub starts 2026-03-09T17:29:52.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.600969+0000 osd.1 (osd.1) 13 : cluster [DBG] 159.0 scrub starts 
2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.602370+0000 osd.1 (osd.1) 14 : cluster [DBG] 159.0 scrub ok 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:50.602370+0000 osd.1 (osd.1) 14 : cluster [DBG] 159.0 scrub ok 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:50.773660+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:50.773660+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.059712+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.059712+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.059802+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.059802+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 192.168.123.100:0/728919398' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59916-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:51.133081+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: cluster 2026-03-09T17:29:51.133081+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.144590+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.144590+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 
192.168.123.100:0/3933478441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]: dispatch 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.174635+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:51 vm00 bash[28333]: audit 2026-03-09T17:29:51.174635+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.453 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: Running main() from gmock_main.cc 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [==========] Running 13 tests from 4 test suites. 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] Global test environment set-up. 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapList 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapList (2301 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapRemove 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapRemove (2173 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.Rollback 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.Rollback (2199 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapGetName 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapGetName (2252 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots (8925 ms total) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Snap 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Snap (4233 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Rollback 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Rollback (4501 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.FutureSnapRollback 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.FutureSnapRollback (5041 ms) 
2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged (13775 ms total) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapList 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapList (3531 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapRemove 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapRemove (2316 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.Rollback 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.Rollback (1428 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapGetName 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapGetName (2372 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC (9647 ms total) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Snap 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Snap (3893 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Rollback 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Rollback (4122 ms) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC (8015 ms total) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] Global test environment tear-down 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [==========] 13 tests from 4 test suites ran. (60114 ms total) 2026-03-09T17:29:52.454 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ PASSED ] 13 tests. 
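The api_snapshots run above is one of the gtest binaries driven by the rados/test.sh workunit; its passing LibRadosSnapshots cases (SnapList, SnapRemove, Rollback, SnapGetName) exercise the librados pool-snapshot calls, and the SelfManaged/EC variants repeat the same pattern on self-managed snapshots and erasure-coded pools. As a rough illustration only, the following is a minimal sketch against the librados C API; the pool, object, and snapshot names are made up for the example and error handling is reduced to a single check, so this is not the suite's actual source.

    /* Minimal sketch (not the test source): write an object, snapshot the
     * pool, overwrite it, roll back to the snapshot, then clean up. */
    #include <rados/librados.h>
    #include <stdio.h>

    static void check(int rc, const char *what) {
        if (rc < 0)
            fprintf(stderr, "%s failed: %d\n", what, rc);
    }

    int main(void) {
        rados_t cluster;
        rados_ioctx_t io;
        const char *pool = "snap-demo";   /* hypothetical pool name */

        check(rados_create(&cluster, NULL), "rados_create");
        check(rados_conf_read_file(cluster, NULL), "rados_conf_read_file");
        check(rados_connect(cluster), "rados_connect");

        check(rados_pool_create(cluster, pool), "rados_pool_create");
        check(rados_ioctx_create(cluster, pool, &io), "rados_ioctx_create");

        check(rados_write_full(io, "obj", "v1", 2), "write v1");
        check(rados_ioctx_snap_create(io, "snap1"), "snap create");
        check(rados_write_full(io, "obj", "v2", 2), "write v2");
        check(rados_ioctx_snap_rollback(io, "obj", "snap1"), "snap rollback");
        check(rados_ioctx_snap_remove(io, "snap1"), "snap remove");

        rados_ioctx_destroy(io);
        check(rados_pool_delete(cluster, pool), "rados_pool_delete");
        rados_shutdown(cluster);
        return 0;
    }

The EC variants run the equivalent sequence against an erasure-coded pool, which is consistent with the audit entries above showing "osd erasure-code-profile rm" and "osd crush rule rm" for the testprofile-LibRadosSnapshotsSelfManagedEC profile as those suites tear down.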
2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:50.429628+0000 osd.6 (osd.6) 17 : cluster [DBG] 159.1 scrub starts 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:50.429628+0000 osd.6 (osd.6) 17 : cluster [DBG] 159.1 scrub starts 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:50.430743+0000 osd.6 (osd.6) 18 : cluster [DBG] 159.1 scrub ok 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:50.430743+0000 osd.6 (osd.6) 18 : cluster [DBG] 159.1 scrub ok 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:50.690150+0000 mgr.y (mgr.14505) 172 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 25 KiB/s rd, 0 B/s wr, 63 op/s 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:50.690150+0000 mgr.y (mgr.14505) 172 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 25 KiB/s rd, 0 B/s wr, 63 op/s 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:51.305847+0000 osd.2 (osd.2) 19 : cluster [DBG] 159.9 scrub starts 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:51.305847+0000 osd.2 (osd.2) 19 : cluster [DBG] 159.9 scrub starts 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:51.307461+0000 osd.2 (osd.2) 20 : cluster [DBG] 159.9 scrub ok 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:51.307461+0000 osd.2 (osd.2) 20 : cluster [DBG] 159.9 scrub ok 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:51.568022+0000 mon.a (mon.0) 1297 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:51.568022+0000 mon.a (mon.0) 1297 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:51.577080+0000 mgr.y (mgr.14505) 173 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:51.577080+0000 mgr.y (mgr.14505) 173 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:51.785512+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:51.785512+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.391105+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.391105+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.391173+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.391173+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:52.424161+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: cluster 2026-03-09T17:29:52.424161+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T17:29:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.435129+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/2320759130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.435129+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/2320759130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.444669+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:52 vm02 bash[23351]: audit 2026-03-09T17:29:52.444669+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:50.429628+0000 osd.6 (osd.6) 17 : cluster [DBG] 159.1 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:50.429628+0000 osd.6 (osd.6) 17 : cluster [DBG] 159.1 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:50.430743+0000 osd.6 (osd.6) 18 : cluster [DBG] 159.1 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:50.430743+0000 osd.6 (osd.6) 18 : cluster [DBG] 159.1 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:50.690150+0000 mgr.y (mgr.14505) 172 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 25 KiB/s rd, 0 B/s wr, 63 op/s 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:50.690150+0000 mgr.y (mgr.14505) 172 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 25 KiB/s rd, 0 B/s wr, 63 op/s 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:51.305847+0000 osd.2 (osd.2) 19 : cluster [DBG] 159.9 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:51.305847+0000 osd.2 (osd.2) 19 : cluster [DBG] 159.9 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:51.307461+0000 osd.2 (osd.2) 20 : cluster [DBG] 159.9 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:51.307461+0000 osd.2 (osd.2) 20 : cluster [DBG] 159.9 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:51.568022+0000 mon.a (mon.0) 1297 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:51.568022+0000 mon.a (mon.0) 1297 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:51.577080+0000 mgr.y (mgr.14505) 173 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:51.577080+0000 mgr.y (mgr.14505) 173 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:51.785512+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:51.785512+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.391105+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.391105+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.391173+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.391173+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:52.424161+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: cluster 2026-03-09T17:29:52.424161+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.435129+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/2320759130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.435129+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/2320759130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.444669+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:52 vm00 bash[28333]: audit 2026-03-09T17:29:52.444669+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:50.429628+0000 osd.6 (osd.6) 17 : cluster [DBG] 159.1 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:50.429628+0000 osd.6 (osd.6) 17 : cluster [DBG] 159.1 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:50.430743+0000 osd.6 (osd.6) 18 : cluster [DBG] 159.1 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:50.430743+0000 osd.6 (osd.6) 18 : cluster [DBG] 159.1 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:50.690150+0000 mgr.y (mgr.14505) 172 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 25 KiB/s rd, 0 B/s wr, 63 op/s 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:50.690150+0000 mgr.y (mgr.14505) 172 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 25 KiB/s rd, 0 B/s wr, 63 op/s 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:51.305847+0000 osd.2 (osd.2) 19 : cluster [DBG] 159.9 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:51.305847+0000 osd.2 (osd.2) 19 : cluster [DBG] 159.9 scrub starts 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:51.307461+0000 osd.2 (osd.2) 20 : cluster [DBG] 159.9 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:51.307461+0000 osd.2 (osd.2) 20 : cluster [DBG] 159.9 scrub ok 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:51.568022+0000 mon.a (mon.0) 1297 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:51.568022+0000 mon.a (mon.0) 1297 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:51.577080+0000 mgr.y (mgr.14505) 173 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:51.577080+0000 mgr.y (mgr.14505) 173 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:51.785512+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:51.785512+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.391105+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.391105+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.100:0/3933478441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-60171-15"}]': finished 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.391173+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.391173+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.100:0/1905052089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60199-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:52.424161+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: cluster 2026-03-09T17:29:52.424161+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.435129+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/2320759130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.435129+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/2320759130' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.444669+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:52 vm00 bash[20770]: audit 2026-03-09T17:29:52.444669+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.426 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [==========] Running 12 tests from 4 test suites. 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] Global test environment set-up. 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscVersion.Version 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscVersion.Version (0 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion (1 ms total) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectFailure 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: unable to get monitor info from DNS SRV with service name: ceph-mon 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-09T17:28:52.331+0000 7f8a64ea0980 -1 failed for service _ceph-mon._tcp 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-09T17:28:52.331+0000 7f8a64ea0980 -1 monclient: get_monmap_and_config cannot identify monitors to contact 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectFailure (76 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectTimeout 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectTimeout (5022 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure (5098 ms total) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscPool 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscPool.PoolCreationRace 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: started 0x7f8a44067cd0 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: started 0x556d048ed670 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: started 2 aios 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: waiting 0x7f8a44067cd0 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: waiting 0x556d048ed670 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: done. 
2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscPool.PoolCreationRace (3947 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscPool (3947 ms total) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 8 tests from LibRadosMisc 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.ClusterFSID 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.ClusterFSID (0 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.Exec 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.Exec (148 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.WriteSame 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.WriteSame (7 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.CmpExt 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.CmpExt (2 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.Applications 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.Applications (4919 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatOSD 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatOSD (0 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatClient 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatClient (0 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.ShutdownRace 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.ShutdownRace (43945 ms) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 8 tests from LibRadosMisc (49022 ms total) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] Global test environment tear-down 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [==========] 12 tests from 4 test suites ran. (61169 ms total) 2026-03-09T17:29:53.427 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ PASSED ] 12 tests. 2026-03-09T17:29:53.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: cluster 2026-03-09T17:29:52.690567+0000 mgr.y (mgr.14505) 174 : cluster [DBG] pgmap v151: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: cluster 2026-03-09T17:29:52.690567+0000 mgr.y (mgr.14505) 174 : cluster [DBG] pgmap v151: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:52.788100+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:52.788100+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:53.412016+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:53.412016+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: cluster 2026-03-09T17:29:53.415838+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: cluster 2026-03-09T17:29:53.415838+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:53.420389+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/471185485' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:53.420389+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/471185485' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:53.429111+0000 mon.a (mon.0) 1304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:53 vm02 bash[23351]: audit 2026-03-09T17:29:53.429111+0000 mon.a (mon.0) 1304 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: cluster 2026-03-09T17:29:52.690567+0000 mgr.y (mgr.14505) 174 : cluster [DBG] pgmap v151: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:54.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: cluster 2026-03-09T17:29:52.690567+0000 mgr.y (mgr.14505) 174 : cluster [DBG] pgmap v151: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:54.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:52.788100+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:52.788100+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:53.412016+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:53.412016+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: cluster 2026-03-09T17:29:53.415838+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: cluster 2026-03-09T17:29:53.415838+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:53.420389+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/471185485' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:53.420389+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/471185485' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:53.429111+0000 mon.a (mon.0) 1304 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:53 vm00 bash[28333]: audit 2026-03-09T17:29:53.429111+0000 mon.a (mon.0) 1304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: cluster 2026-03-09T17:29:52.690567+0000 mgr.y (mgr.14505) 174 : cluster [DBG] pgmap v151: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: cluster 2026-03-09T17:29:52.690567+0000 mgr.y (mgr.14505) 174 : cluster [DBG] pgmap v151: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:52.788100+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:52.788100+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:53.412016+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:53.412016+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59908-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: cluster 2026-03-09T17:29:53.415838+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: cluster 2026-03-09T17:29:53.415838+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:53.420389+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/471185485' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:53.420389+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 
192.168.123.100:0/471185485' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:53.429111+0000 mon.a (mon.0) 1304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:53 vm00 bash[20770]: audit 2026-03-09T17:29:53.429111+0000 mon.a (mon.0) 1304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:54 vm02 bash[23351]: audit 2026-03-09T17:29:53.788947+0000 mon.c (mon.2) 243 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:54 vm02 bash[23351]: audit 2026-03-09T17:29:53.788947+0000 mon.c (mon.2) 243 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:54 vm02 bash[23351]: audit 2026-03-09T17:29:54.482276+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:54 vm02 bash[23351]: audit 2026-03-09T17:29:54.482276+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:54 vm02 bash[23351]: cluster 2026-03-09T17:29:54.489060+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T17:29:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:54 vm02 bash[23351]: cluster 2026-03-09T17:29:54.489060+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T17:29:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:54 vm00 bash[28333]: audit 2026-03-09T17:29:53.788947+0000 mon.c (mon.2) 243 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:54 vm00 bash[28333]: audit 2026-03-09T17:29:53.788947+0000 mon.c (mon.2) 243 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:54 vm00 bash[28333]: audit 2026-03-09T17:29:54.482276+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:54 vm00 bash[28333]: audit 2026-03-09T17:29:54.482276+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:54 vm00 bash[28333]: cluster 2026-03-09T17:29:54.489060+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:54 vm00 bash[28333]: cluster 2026-03-09T17:29:54.489060+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:54 vm00 bash[20770]: audit 2026-03-09T17:29:53.788947+0000 mon.c (mon.2) 243 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:54 vm00 bash[20770]: audit 2026-03-09T17:29:53.788947+0000 mon.c (mon.2) 243 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:54 vm00 bash[20770]: audit 2026-03-09T17:29:54.482276+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:54 vm00 bash[20770]: audit 2026-03-09T17:29:54.482276+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59916-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:54 vm00 bash[20770]: cluster 2026-03-09T17:29:54.489060+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T17:29:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:54 vm00 bash[20770]: cluster 2026-03-09T17:29:54.489060+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: cluster 2026-03-09T17:29:54.691096+0000 mgr.y (mgr.14505) 175 : cluster [DBG] pgmap v154: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: cluster 2026-03-09T17:29:54.691096+0000 mgr.y (mgr.14505) 175 : cluster [DBG] pgmap v154: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:54.789898+0000 mon.c (mon.2) 244 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:54.789898+0000 mon.c (mon.2) 244 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: cluster 2026-03-09T17:29:55.491387+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: cluster 2026-03-09T17:29:55.491387+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.503886+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/3390221674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.503886+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/3390221674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.506049+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.506049+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.579541+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.579541+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.580083+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.580083+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.581940+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.581940+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.582294+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.582294+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.582965+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.582965+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.583313+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:55 vm02 bash[23351]: audit 2026-03-09T17:29:55.583313+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: cluster 2026-03-09T17:29:54.691096+0000 mgr.y (mgr.14505) 175 : cluster [DBG] pgmap v154: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: cluster 2026-03-09T17:29:54.691096+0000 mgr.y (mgr.14505) 175 : cluster [DBG] pgmap v154: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:54.789898+0000 mon.c (mon.2) 244 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:54.789898+0000 mon.c (mon.2) 244 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: cluster 2026-03-09T17:29:55.491387+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: cluster 2026-03-09T17:29:55.491387+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.503886+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/3390221674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.503886+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/3390221674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.506049+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.506049+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.579541+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 
192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.579541+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.580083+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.580083+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.581940+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.581940+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.582294+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.582294+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.582965+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.582965+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.583313+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:55 vm00 bash[28333]: audit 2026-03-09T17:29:55.583313+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: cluster 2026-03-09T17:29:54.691096+0000 mgr.y (mgr.14505) 175 : cluster [DBG] pgmap v154: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: cluster 2026-03-09T17:29:54.691096+0000 mgr.y (mgr.14505) 175 : cluster [DBG] pgmap v154: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:54.789898+0000 mon.c (mon.2) 244 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:54.789898+0000 mon.c (mon.2) 244 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: cluster 2026-03-09T17:29:55.491387+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: cluster 2026-03-09T17:29:55.491387+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.503886+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/3390221674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.503886+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/3390221674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.506049+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.506049+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.579541+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.579541+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.580083+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.580083+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.581940+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.581940+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.582294+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.582294+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.582965+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.582965+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 
192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.583313+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:55 vm00 bash[20770]: audit 2026-03-09T17:29:55.583313+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:29:56.635 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:29:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:29:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:55.790586+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:55.790586+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.490609+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.490609+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.490692+0000 mon.a (mon.0) 1313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.490692+0000 mon.a (mon.0) 1313 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: cluster 2026-03-09T17:29:56.494808+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: cluster 2026-03-09T17:29:56.494808+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.526441+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.526441+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.526815+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.526815+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.528544+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/4224719348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.528544+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/4224719348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.528772+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: audit 2026-03-09T17:29:56.528772+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: cluster 2026-03-09T17:29:56.569065+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:56 vm02 bash[23351]: cluster 2026-03-09T17:29:56.569065+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:55.790586+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:55.790586+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.490609+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.490609+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.490692+0000 mon.a (mon.0) 1313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.490692+0000 mon.a (mon.0) 1313 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: cluster 2026-03-09T17:29:56.494808+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: cluster 2026-03-09T17:29:56.494808+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.526441+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.526441+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.526815+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.526815+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.528544+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/4224719348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.528544+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/4224719348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.528772+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: audit 2026-03-09T17:29:56.528772+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: cluster 2026-03-09T17:29:56.569065+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:56 vm00 bash[28333]: cluster 2026-03-09T17:29:56.569065+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:55.790586+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:55.790586+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.490609+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.490609+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59908-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.490692+0000 mon.a (mon.0) 1313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.490692+0000 mon.a (mon.0) 1313 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: cluster 2026-03-09T17:29:56.494808+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: cluster 2026-03-09T17:29:56.494808+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.526441+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.526441+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.526815+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.526815+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.528544+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/4224719348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.528544+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/4224719348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.528772+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: audit 2026-03-09T17:29:56.528772+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: cluster 2026-03-09T17:29:56.569065+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:56 vm00 bash[20770]: cluster 2026-03-09T17:29:56.569065+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:29:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: cluster 2026-03-09T17:29:56.692567+0000 mgr.y (mgr.14505) 176 : cluster [DBG] pgmap v157: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: cluster 2026-03-09T17:29:56.692567+0000 mgr.y (mgr.14505) 176 : cluster [DBG] pgmap v157: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:56.791427+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:56.791427+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:57.285105+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:57.285105+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:57.288631+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:57.288631+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:57.515406+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: audit 2026-03-09T17:29:57.515406+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: cluster 2026-03-09T17:29:57.542577+0000 mon.a (mon.0) 1320 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:57 vm00 bash[28333]: cluster 2026-03-09T17:29:57.542577+0000 mon.a (mon.0) 1320 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: cluster 2026-03-09T17:29:56.692567+0000 mgr.y (mgr.14505) 176 : cluster [DBG] pgmap v157: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: cluster 2026-03-09T17:29:56.692567+0000 mgr.y (mgr.14505) 176 : cluster [DBG] pgmap v157: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:56.791427+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:56.791427+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:57.285105+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:57.285105+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:57.288631+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:57.288631+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:57.515406+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: audit 2026-03-09T17:29:57.515406+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: cluster 2026-03-09T17:29:57.542577+0000 mon.a (mon.0) 1320 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T17:29:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:57 vm00 bash[20770]: cluster 2026-03-09T17:29:57.542577+0000 mon.a (mon.0) 1320 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: cluster 2026-03-09T17:29:56.692567+0000 mgr.y (mgr.14505) 176 : cluster [DBG] pgmap v157: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: cluster 2026-03-09T17:29:56.692567+0000 mgr.y (mgr.14505) 176 : cluster [DBG] pgmap v157: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 39 op/s 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:56.791427+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:56.791427+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:57.285105+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:57.285105+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:57.288631+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:57.288631+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:57.515406+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: audit 2026-03-09T17:29:57.515406+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: cluster 2026-03-09T17:29:57.542577+0000 mon.a (mon.0) 1320 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T17:29:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:57 vm02 bash[23351]: cluster 2026-03-09T17:29:57.542577+0000 mon.a (mon.0) 1320 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T17:29:59.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:57.792396+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:57.792396+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:58.531391+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:58.531391+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: cluster 2026-03-09T17:29:58.541104+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: cluster 2026-03-09T17:29:58.541104+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:58.563173+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.100:0/1727122663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:58.563173+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 
192.168.123.100:0/1727122663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:58.564022+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:29:58 vm02 bash[23351]: audit 2026-03-09T17:29:58.564022+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:57.792396+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:57.792396+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:58.531391+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:58.531391+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: cluster 2026-03-09T17:29:58.541104+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: cluster 2026-03-09T17:29:58.541104+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:58.563173+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.100:0/1727122663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:58.563173+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 
192.168.123.100:0/1727122663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:58.564022+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:29:58 vm00 bash[20770]: audit 2026-03-09T17:29:58.564022+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:57.792396+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:57.792396+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:58.531391+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:58.531391+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-60199-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: cluster 2026-03-09T17:29:58.541104+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: cluster 2026-03-09T17:29:58.541104+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:58.563173+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.100:0/1727122663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:58.563173+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 
192.168.123.100:0/1727122663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:58.564022+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:29:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:29:58 vm00 bash[28333]: audit 2026-03-09T17:29:58.564022+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:00.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:00 vm00 bash[20770]: cluster 2026-03-09T17:29:58.692982+0000 mgr.y (mgr.14505) 177 : cluster [DBG] pgmap v160: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:00.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:00 vm00 bash[20770]: cluster 2026-03-09T17:29:58.692982+0000 mgr.y (mgr.14505) 177 : cluster [DBG] pgmap v160: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:00.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:00 vm00 bash[20770]: audit 2026-03-09T17:29:58.793638+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:00.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:00 vm00 bash[20770]: audit 2026-03-09T17:29:58.793638+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:00 vm00 bash[28333]: cluster 2026-03-09T17:29:58.692982+0000 mgr.y (mgr.14505) 177 : cluster [DBG] pgmap v160: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:00 vm00 bash[28333]: cluster 2026-03-09T17:29:58.692982+0000 mgr.y (mgr.14505) 177 : cluster [DBG] pgmap v160: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:00 vm00 bash[28333]: audit 2026-03-09T17:29:58.793638+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:00 vm00 bash[28333]: audit 2026-03-09T17:29:58.793638+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:00 vm02 bash[23351]: cluster 2026-03-09T17:29:58.692982+0000 mgr.y (mgr.14505) 177 : cluster [DBG] pgmap v160: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:00 vm02 bash[23351]: cluster 2026-03-09T17:29:58.692982+0000 mgr.y (mgr.14505) 177 : cluster [DBG] pgmap v160: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:00 vm02 bash[23351]: audit 2026-03-09T17:29:58.793638+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:00 vm02 bash[23351]: audit 2026-03-09T17:29:58.793638+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:29:59.988638+0000 mon.c (mon.2) 257 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:29:59.988638+0000 mon.c (mon.2) 257 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000097+0000 mon.a (mon.0) 1324 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000097+0000 mon.a (mon.0) 1324 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000121+0000 mon.a (mon.0) 1325 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000121+0000 mon.a (mon.0) 1325 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000126+0000 mon.a (mon.0) 1326 : cluster [WRN] pool 'test-rados-api-vm00-60118-13' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000126+0000 mon.a (mon.0) 1326 : cluster [WRN] pool 'test-rados-api-vm00-60118-13' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 
2026-03-09T17:30:00.000130+0000 mon.a (mon.0) 1327 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000130+0000 mon.a (mon.0) 1327 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000134+0000 mon.a (mon.0) 1328 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000134+0000 mon.a (mon.0) 1328 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000139+0000 mon.a (mon.0) 1329 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000139+0000 mon.a (mon.0) 1329 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000145+0000 mon.a (mon.0) 1330 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000145+0000 mon.a (mon.0) 1330 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000150+0000 mon.a (mon.0) 1331 : cluster [WRN] application not enabled on pool 'RoundTripWriteFullPP2_vm00-59916-17' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000150+0000 mon.a (mon.0) 1331 : cluster [WRN] application not enabled on pool 'RoundTripWriteFullPP2_vm00-59916-17' 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000158+0000 mon.a (mon.0) 1332 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.000158+0000 mon.a (mon.0) 1332 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:00.109672+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:00.109672+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.124419+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:00.124419+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:00.143027+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:00.143027+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:00.989402+0000 mon.c (mon.2) 258 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:00.989402+0000 mon.c (mon.2) 258 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:01.113070+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: audit 2026-03-09T17:30:01.113070+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:01.117387+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:01 vm00 bash[20770]: cluster 2026-03-09T17:30:01.117387+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:29:59.988638+0000 mon.c (mon.2) 257 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:29:59.988638+0000 mon.c (mon.2) 257 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000097+0000 mon.a (mon.0) 1324 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000097+0000 mon.a (mon.0) 1324 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000121+0000 mon.a (mon.0) 1325 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000121+0000 mon.a (mon.0) 1325 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000126+0000 mon.a (mon.0) 1326 : cluster [WRN] pool 'test-rados-api-vm00-60118-13' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000126+0000 mon.a (mon.0) 1326 : cluster [WRN] pool 'test-rados-api-vm00-60118-13' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-09T17:30:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000130+0000 mon.a (mon.0) 1327 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000130+0000 mon.a (mon.0) 1327 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000134+0000 mon.a (mon.0) 1328 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000134+0000 mon.a (mon.0) 1328 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000139+0000 mon.a (mon.0) 1329 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000139+0000 mon.a (mon.0) 1329 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000145+0000 mon.a (mon.0) 1330 : cluster [WRN] 
application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000145+0000 mon.a (mon.0) 1330 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000150+0000 mon.a (mon.0) 1331 : cluster [WRN] application not enabled on pool 'RoundTripWriteFullPP2_vm00-59916-17' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000150+0000 mon.a (mon.0) 1331 : cluster [WRN] application not enabled on pool 'RoundTripWriteFullPP2_vm00-59916-17' 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000158+0000 mon.a (mon.0) 1332 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.000158+0000 mon.a (mon.0) 1332 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:00.109672+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:00.109672+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.124419+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:00.124419+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:00.143027+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:00.143027+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:00.989402+0000 mon.c (mon.2) 258 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:00.989402+0000 mon.c (mon.2) 258 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:01.113070+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: audit 2026-03-09T17:30:01.113070+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:01.117387+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T17:30:01.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:01 vm00 bash[28333]: cluster 2026-03-09T17:30:01.117387+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:29:59.988638+0000 mon.c (mon.2) 257 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:29:59.988638+0000 mon.c (mon.2) 257 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000097+0000 mon.a (mon.0) 1324 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000097+0000 mon.a (mon.0) 1324 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000121+0000 mon.a (mon.0) 1325 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000121+0000 mon.a (mon.0) 1325 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000126+0000 mon.a (mon.0) 1326 : cluster [WRN] pool 'test-rados-api-vm00-60118-13' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000126+0000 mon.a (mon.0) 1326 : cluster [WRN] pool 'test-rados-api-vm00-60118-13' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000130+0000 mon.a (mon.0) 1327 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000130+0000 mon.a (mon.0) 1327 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000134+0000 mon.a (mon.0) 1328 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:30:01.586 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000134+0000 mon.a (mon.0) 1328 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000139+0000 mon.a (mon.0) 1329 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000139+0000 mon.a (mon.0) 1329 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000145+0000 mon.a (mon.0) 1330 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000145+0000 mon.a (mon.0) 1330 : cluster [WRN] application not enabled on pool 
'AssertExistsvm00-60946-1' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000150+0000 mon.a (mon.0) 1331 : cluster [WRN] application not enabled on pool 'RoundTripWriteFullPP2_vm00-59916-17' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000150+0000 mon.a (mon.0) 1331 : cluster [WRN] application not enabled on pool 'RoundTripWriteFullPP2_vm00-59916-17' 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000158+0000 mon.a (mon.0) 1332 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.000158+0000 mon.a (mon.0) 1332 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:00.109672+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:00.109672+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59908-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.124419+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:00.124419+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:00.143027+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:00.143027+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:00.989402+0000 mon.c (mon.2) 258 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:01.113070+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: audit 2026-03-09T17:30:01.113070+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.100:0/2471639607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59916-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:01.117387+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T17:30:01.591 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:01 vm02 bash[23351]: cluster 2026-03-09T17:30:01.117387+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T17:30:01.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:30:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:30:02.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: cluster 2026-03-09T17:30:00.693383+0000 mgr.y (mgr.14505) 178 : cluster [DBG] pgmap v162: 428 pgs: 72 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: cluster 2026-03-09T17:30:00.693383+0000 mgr.y (mgr.14505) 178 : cluster [DBG] pgmap v162: 428 pgs: 72 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: cluster 2026-03-09T17:30:01.569665+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: cluster 2026-03-09T17:30:01.569665+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: audit 2026-03-09T17:30:01.990138+0000 mon.c (mon.2) 259 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: audit 2026-03-09T17:30:01.990138+0000 mon.c (mon.2) 259 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: cluster 2026-03-09T17:30:02.128365+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: cluster 2026-03-09T17:30:02.128365+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: audit 2026-03-09T17:30:02.139115+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3388209293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: audit 2026-03-09T17:30:02.139115+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3388209293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: audit 2026-03-09T17:30:02.151494+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:02 vm00 bash[28333]: audit 2026-03-09T17:30:02.151494+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: cluster 2026-03-09T17:30:00.693383+0000 mgr.y (mgr.14505) 178 : cluster [DBG] pgmap v162: 428 pgs: 72 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: cluster 2026-03-09T17:30:00.693383+0000 mgr.y (mgr.14505) 178 : cluster [DBG] pgmap v162: 428 pgs: 72 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: cluster 2026-03-09T17:30:01.569665+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: cluster 2026-03-09T17:30:01.569665+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: audit 2026-03-09T17:30:01.990138+0000 mon.c (mon.2) 259 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: audit 2026-03-09T17:30:01.990138+0000 mon.c (mon.2) 259 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: cluster 2026-03-09T17:30:02.128365+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: cluster 2026-03-09T17:30:02.128365+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: audit 2026-03-09T17:30:02.139115+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3388209293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: audit 2026-03-09T17:30:02.139115+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3388209293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: audit 2026-03-09T17:30:02.151494+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:02 vm00 bash[20770]: audit 2026-03-09T17:30:02.151494+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: cluster 2026-03-09T17:30:00.693383+0000 mgr.y (mgr.14505) 178 : cluster [DBG] pgmap v162: 428 pgs: 72 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: cluster 2026-03-09T17:30:00.693383+0000 mgr.y (mgr.14505) 178 : cluster [DBG] pgmap v162: 428 pgs: 72 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: cluster 2026-03-09T17:30:01.569665+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: cluster 2026-03-09T17:30:01.569665+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: audit 2026-03-09T17:30:01.990138+0000 mon.c (mon.2) 259 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: audit 2026-03-09T17:30:01.990138+0000 mon.c (mon.2) 259 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: cluster 2026-03-09T17:30:02.128365+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: cluster 2026-03-09T17:30:02.128365+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: audit 2026-03-09T17:30:02.139115+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3388209293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: audit 2026-03-09T17:30:02.139115+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3388209293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: audit 2026-03-09T17:30:02.151494+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:02 vm02 bash[23351]: audit 2026-03-09T17:30:02.151494+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:03.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:01.586035+0000 mgr.y (mgr.14505) 179 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:01.586035+0000 mgr.y (mgr.14505) 179 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:02.991009+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:02.991009+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:03.155892+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:03.155892+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: cluster 2026-03-09T17:30:03.168975+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: cluster 2026-03-09T17:30:03.168975+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:03.173495+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:03 vm00 bash[20770]: audit 2026-03-09T17:30:03.173495+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:01.586035+0000 mgr.y (mgr.14505) 179 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:01.586035+0000 mgr.y (mgr.14505) 179 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:02.991009+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:02.991009+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:03.155892+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:03.155892+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: cluster 2026-03-09T17:30:03.168975+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: cluster 2026-03-09T17:30:03.168975+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:03.173495+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:03 vm00 bash[28333]: audit 2026-03-09T17:30:03.173495+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:01.586035+0000 mgr.y (mgr.14505) 179 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:01.586035+0000 mgr.y (mgr.14505) 179 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:02.991009+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:02.991009+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:03.155892+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:03.155892+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59908-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: cluster 2026-03-09T17:30:03.168975+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: cluster 2026-03-09T17:30:03.168975+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:03.173495+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:03 vm02 bash[23351]: audit 2026-03-09T17:30:03.173495+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:04.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:04 vm00 bash[20770]: cluster 2026-03-09T17:30:02.693730+0000 mgr.y (mgr.14505) 180 : cluster [DBG] pgmap v165: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:04 vm00 bash[20770]: cluster 2026-03-09T17:30:02.693730+0000 mgr.y (mgr.14505) 180 : cluster [DBG] pgmap v165: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:04 vm00 bash[20770]: audit 2026-03-09T17:30:03.991828+0000 mon.c (mon.2) 261 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:04 vm00 bash[20770]: audit 2026-03-09T17:30:03.991828+0000 mon.c (mon.2) 261 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:04 vm00 bash[28333]: cluster 2026-03-09T17:30:02.693730+0000 mgr.y (mgr.14505) 180 : cluster [DBG] pgmap v165: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:04 vm00 bash[28333]: cluster 2026-03-09T17:30:02.693730+0000 mgr.y (mgr.14505) 180 : cluster [DBG] pgmap v165: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:04 vm00 bash[28333]: audit 2026-03-09T17:30:03.991828+0000 mon.c (mon.2) 261 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:04 vm00 bash[28333]: audit 2026-03-09T17:30:03.991828+0000 mon.c (mon.2) 261 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:04 vm02 bash[23351]: cluster 2026-03-09T17:30:02.693730+0000 mgr.y (mgr.14505) 180 : cluster [DBG] pgmap v165: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:04 vm02 bash[23351]: cluster 2026-03-09T17:30:02.693730+0000 mgr.y (mgr.14505) 180 : cluster [DBG] pgmap v165: 396 pgs: 40 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:04 vm02 bash[23351]: audit 2026-03-09T17:30:03.991828+0000 mon.c (mon.2) 261 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:04 vm02 bash[23351]: audit 2026-03-09T17:30:03.991828+0000 mon.c (mon.2) 261 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.364791+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.364791+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: cluster 2026-03-09T17:30:04.414480+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: cluster 2026-03-09T17:30:04.414480+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.487672+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.487672+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.489013+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.489013+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.489517+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.489517+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: cluster 2026-03-09T17:30:04.694222+0000 mgr.y (mgr.14505) 181 : cluster [DBG] pgmap v168: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: cluster 2026-03-09T17:30:04.694222+0000 mgr.y (mgr.14505) 181 : cluster [DBG] pgmap v168: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.992656+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:04.992656+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.385021+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.385021+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: cluster 2026-03-09T17:30:05.396981+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: cluster 2026-03-09T17:30:05.396981+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.422825+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 
192.168.123.100:0/1065158796' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.422825+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/1065158796' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.426216+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.426216+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.427316+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.427316+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.434697+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.434697+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.434770+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:05 vm00 bash[20770]: audit 2026-03-09T17:30:05.434770+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.364791+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.364791+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: cluster 2026-03-09T17:30:04.414480+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: cluster 2026-03-09T17:30:04.414480+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.487672+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.487672+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.489013+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.489013+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.489517+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.489517+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: cluster 2026-03-09T17:30:04.694222+0000 mgr.y (mgr.14505) 181 : cluster [DBG] pgmap v168: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: cluster 2026-03-09T17:30:04.694222+0000 mgr.y (mgr.14505) 181 : cluster [DBG] pgmap v168: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.992656+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:04.992656+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.385021+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.385021+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: cluster 2026-03-09T17:30:05.396981+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: cluster 2026-03-09T17:30:05.396981+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.422825+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/1065158796' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.422825+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/1065158796' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.426216+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2"}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.426216+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2"}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.427316+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.427316+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.434697+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.434697+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.434770+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:05 vm00 bash[28333]: audit 2026-03-09T17:30:05.434770+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.364791+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 192.168.123.100:0/1262985489' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.364791+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 
192.168.123.100:0/1262985489' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59916-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: cluster 2026-03-09T17:30:04.414480+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T17:30:05.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: cluster 2026-03-09T17:30:04.414480+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.487672+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.487672+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.489013+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.489013+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.489517+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.489517+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: cluster 2026-03-09T17:30:04.694222+0000 mgr.y (mgr.14505) 181 : cluster [DBG] pgmap v168: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: cluster 2026-03-09T17:30:04.694222+0000 mgr.y (mgr.14505) 181 : cluster [DBG] pgmap v168: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.992656+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:04.992656+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.385021+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.385021+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: cluster 2026-03-09T17:30:05.396981+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: cluster 2026-03-09T17:30:05.396981+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.422825+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/1065158796' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.422825+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/1065158796' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.426216+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2"}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.426216+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2"}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.427316+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.427316+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.434697+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.434697+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.434770+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:05 vm02 bash[23351]: audit 2026-03-09T17:30:05.434770+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: Running main() from gmock_main.cc 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [==========] Running 21 tests from 5 test suites. 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] Global test environment set-up. 
2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: seed 60199 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapListPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapListPP (2302 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapRemovePP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapRemovePP (2173 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.RollbackPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.RollbackPP (2220 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapGetNamePP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapGetNamePP (2230 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapCreateRemovePP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapCreateRemovePP (4030 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP (12955 ms total) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapPP (4726 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.RollbackPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.RollbackPP (3888 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP (6204 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.Bug11677 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.Bug11677 (4596 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.OrderSnap 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.OrderSnap (1916 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.WriteRollback 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: ./src/test/librados/snapshots_cxx.cc:460: Skipped 
2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback (0 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: deleting snap 14 in pool LibRadosSnapshotsSelfManagedPP_vm00-60199-7 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: waiting for snaps to purge 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap (18065 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP (39395 ms total) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected (2 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance (5498 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode (5500 ms total) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapListPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapListPP (3587 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapRemovePP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapRemovePP (2284 ms) 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.RollbackPP 2026-03-09T17:30:06.428 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.RollbackPP (2012 ms) 2026-03-09T17:30:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:05.993454+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:05.993454+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.390791+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.390791+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.391104+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.391104+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.423652+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.423652+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.424469+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.424469+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 
192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: cluster 2026-03-09T17:30:06.424992+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: cluster 2026-03-09T17:30:06.424992+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.425927+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3379149359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.425927+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3379149359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.426479+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.426479+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.432232+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:06 vm00 bash[28333]: audit 2026-03-09T17:30:06.432232+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:05.993454+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:05.993454+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.390791+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.390791+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.391104+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.391104+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.423652+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.423652+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.424469+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.424469+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: cluster 2026-03-09T17:30:06.424992+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: cluster 2026-03-09T17:30:06.424992+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.425927+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 
192.168.123.100:0/3379149359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.425927+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3379149359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.426479+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.426479+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.432232+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:06 vm00 bash[20770]: audit 2026-03-09T17:30:06.432232+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:30:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:30:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:05.993454+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:05.993454+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.390791+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.390791+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59908-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.391104+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.391104+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-60103-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.423652+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.423652+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.424469+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.424469+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: cluster 2026-03-09T17:30:06.424992+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: cluster 2026-03-09T17:30:06.424992+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.425927+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3379149359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.425927+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 
192.168.123.100:0/3379149359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.426479+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.426479+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.432232+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:06 vm02 bash[23351]: audit 2026-03-09T17:30:06.432232+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: cluster 2026-03-09T17:30:06.570342+0000 mon.a (mon.0) 1356 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: cluster 2026-03-09T17:30:06.570342+0000 mon.a (mon.0) 1356 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.619015+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:30:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.619015+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:30:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.619064+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.619064+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.625271+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.625271+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: cluster 2026-03-09T17:30:06.625725+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: cluster 2026-03-09T17:30:06.625725+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.636174+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.636174+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: cluster 2026-03-09T17:30:06.694594+0000 mgr.y (mgr.14505) 182 : cluster [DBG] pgmap v172: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: cluster 2026-03-09T17:30:06.694594+0000 mgr.y (mgr.14505) 182 : cluster [DBG] pgmap v172: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.994233+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:07 vm00 bash[28333]: audit 2026-03-09T17:30:06.994233+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: cluster 2026-03-09T17:30:06.570342+0000 mon.a (mon.0) 1356 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: cluster 2026-03-09T17:30:06.570342+0000 mon.a (mon.0) 1356 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.619015+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.619015+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.619064+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.619064+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.625271+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.625271+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: cluster 2026-03-09T17:30:06.625725+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: cluster 2026-03-09T17:30:06.625725+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.636174+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.636174+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: cluster 2026-03-09T17:30:06.694594+0000 mgr.y (mgr.14505) 182 : cluster [DBG] pgmap v172: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: cluster 2026-03-09T17:30:06.694594+0000 mgr.y (mgr.14505) 182 : cluster [DBG] pgmap v172: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.994233+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:07 vm00 bash[20770]: audit 2026-03-09T17:30:06.994233+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: cluster 2026-03-09T17:30:06.570342+0000 mon.a (mon.0) 1356 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: cluster 2026-03-09T17:30:06.570342+0000 mon.a (mon.0) 1356 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.619015+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.619015+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.619064+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.619064+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59916-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.625271+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.625271+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: cluster 2026-03-09T17:30:06.625725+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: cluster 2026-03-09T17:30:06.625725+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.636174+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.636174+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: cluster 2026-03-09T17:30:06.694594+0000 mgr.y (mgr.14505) 182 : cluster [DBG] pgmap v172: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: cluster 2026-03-09T17:30:06.694594+0000 mgr.y (mgr.14505) 182 : cluster [DBG] pgmap v172: 396 pgs: 32 unknown, 364 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.994233+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:07 vm02 bash[23351]: audit 2026-03-09T17:30:06.994233+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.703115+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.703115+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: cluster 2026-03-09T17:30:07.743315+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: cluster 2026-03-09T17:30:07.743315+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.746332+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.746332+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.747014+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.747014+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.747116+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.747116+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.995000+0000 mon.c (mon.2) 274 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:08 vm02 bash[23351]: audit 2026-03-09T17:30:07.995000+0000 mon.c (mon.2) 274 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.703115+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:30:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.703115+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:30:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: cluster 2026-03-09T17:30:07.743315+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T17:30:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: cluster 2026-03-09T17:30:07.743315+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T17:30:09.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.746332+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.746332+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.747014+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.747014+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.747116+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.747116+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.995000+0000 mon.c (mon.2) 274 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:08 vm00 bash[28333]: audit 2026-03-09T17:30:07.995000+0000 mon.c (mon.2) 274 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.703115+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.703115+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: cluster 2026-03-09T17:30:07.743315+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: cluster 2026-03-09T17:30:07.743315+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.746332+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.746332+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.100:0/4068933287' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.747014+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.747014+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.747116+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.747116+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.995000+0000 mon.c (mon.2) 274 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:08 vm00 bash[20770]: audit 2026-03-09T17:30:07.995000+0000 mon.c (mon.2) 274 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ api_misc_pp: [==========] Running 31 tests from 7 test suites. 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] Global test environment set-up. 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscVersion.VersionPP 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscVersion.VersionPP (0 ms) 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion (0 ms total) 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 22 tests from LibRadosMiscPP 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: seed 60103 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WaitOSDMapPP 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WaitOSDMapPP (0 ms) 2026-03-09T17:30:09.933 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNamePP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNamePP (607 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongLocatorPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongLocatorPP (48 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNSpacePP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNSpacePP (38 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongAttrNamePP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongAttrNamePP (53 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.ExecPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.ExecPP (7 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BadFlagsPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BadFlagsPP (7 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate1PP 
2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate1PP (10 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate2PP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate2PP (3 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigObjectPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigObjectPP (33 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AioOperatePP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AioOperatePP (138 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertExistsPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertExistsPP (60 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertVersionPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertVersionPP (447 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigAttrPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: osd_max_attr_size = 0 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: osd_max_attr_size == 0; skipping test 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigAttrPP (4957 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyPP (1210 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyScrubPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: waiting for initial deep scrubs... 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: done waiting, doing copies 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: waiting for final deep scrubs... 
2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: done waiting 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyScrubPP (62286 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WriteSamePP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WriteSamePP (108 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CmpExtPP 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CmpExtPP (2 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Applications 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Applications (4376 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatOSD 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatOSD (0 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatClient 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatClient (0 ms) 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Conf 2026-03-09T17:30:09.934 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Conf (0 ms) 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: cluster 2026-03-09T17:30:08.695156+0000 mgr.y (mgr.14505) 183 : cluster [DBG] pgmap v174: 396 pgs: 2 creating+peering, 1 active+clean+snaptrim, 30 unknown, 363 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: cluster 2026-03-09T17:30:08.695156+0000 mgr.y (mgr.14505) 183 : cluster [DBG] pgmap v174: 396 pgs: 2 creating+peering, 1 active+clean+snaptrim, 30 unknown, 363 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.800710+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.800710+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.800834+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.800834+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: cluster 2026-03-09T17:30:08.858602+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: cluster 2026-03-09T17:30:08.858602+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.863637+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.863637+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.863997+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.100:0/1753854889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.863997+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.100:0/1753854889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.865442+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.865442+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.867170+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.867170+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.995912+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:09 vm00 bash[28333]: audit 2026-03-09T17:30:08.995912+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: cluster 2026-03-09T17:30:08.695156+0000 mgr.y (mgr.14505) 183 : cluster [DBG] pgmap v174: 396 pgs: 2 creating+peering, 1 active+clean+snaptrim, 30 unknown, 363 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: cluster 2026-03-09T17:30:08.695156+0000 mgr.y (mgr.14505) 183 : cluster [DBG] pgmap v174: 396 pgs: 2 creating+peering, 1 active+clean+snaptrim, 30 unknown, 363 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.800710+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.800710+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.800834+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.800834+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: cluster 2026-03-09T17:30:08.858602+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: cluster 2026-03-09T17:30:08.858602+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.863637+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 
192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.863637+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.863997+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.100:0/1753854889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.863997+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.100:0/1753854889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.865442+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.865442+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.867170+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.867170+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.995912+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:10.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:09 vm00 bash[20770]: audit 2026-03-09T17:30:08.995912+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: cluster 2026-03-09T17:30:08.695156+0000 mgr.y (mgr.14505) 183 : cluster [DBG] pgmap v174: 396 pgs: 2 creating+peering, 1 active+clean+snaptrim, 30 unknown, 363 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: cluster 2026-03-09T17:30:08.695156+0000 mgr.y (mgr.14505) 183 : cluster [DBG] pgmap v174: 396 pgs: 2 creating+peering, 1 active+clean+snaptrim, 30 unknown, 363 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.800710+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.800710+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 192.168.123.100:0/136170204' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59908-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.800834+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.800834+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-60103-1","app":"app1","key":"key1"}]': finished 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: cluster 2026-03-09T17:30:08.858602+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: cluster 2026-03-09T17:30:08.858602+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.863637+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.863637+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.863997+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 
192.168.123.100:0/1753854889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.863997+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.100:0/1753854889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.865442+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.865442+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.867170+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.867170+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.995912+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:09 vm02 bash[23351]: audit 2026-03-09T17:30:08.995912+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.865401+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.865401+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.865528+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.865528+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: cluster 2026-03-09T17:30:09.885487+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: cluster 2026-03-09T17:30:09.885487+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.930360+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.930360+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.942813+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.942813+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.960501+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.960501+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.964140+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.964140+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.974512+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.974512+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.974591+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.974591+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.975648+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.975648+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.981790+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.981790+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.996853+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:11 vm02 bash[23351]: audit 2026-03-09T17:30:09.996853+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.865401+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.865401+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.865528+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.865528+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: cluster 2026-03-09T17:30:09.885487+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: cluster 2026-03-09T17:30:09.885487+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.930360+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.930360+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.942813+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.942813+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.960501+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.960501+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.964140+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.964140+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.974512+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.974512+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.974591+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.974591+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.975648+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.975648+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.981790+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.981790+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.996853+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:11 vm00 bash[28333]: audit 2026-03-09T17:30:09.996853+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.865401+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.865401+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.865528+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.865528+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59916-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: cluster 2026-03-09T17:30:09.885487+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: cluster 2026-03-09T17:30:09.885487+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.930360+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.930360+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 
192.168.123.100:0/4054499440' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.942813+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.942813+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.960501+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.960501+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.964140+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.964140+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.974512+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.974512+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.974591+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.974591+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.975648+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.975648+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.981790+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.981790+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.996853+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:11 vm00 bash[20770]: audit 2026-03-09T17:30:09.996853+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:11.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:30:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: cluster 2026-03-09T17:30:10.695566+0000 mgr.y (mgr.14505) 184 : cluster [DBG] pgmap v177: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: cluster 2026-03-09T17:30:10.695566+0000 mgr.y (mgr.14505) 184 : cluster [DBG] pgmap v177: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:10.997676+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:10.997676+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.057441+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.057441+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.057472+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.057472+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.062294+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.062294+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: cluster 2026-03-09T17:30:11.092844+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: cluster 2026-03-09T17:30:11.092844+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.093108+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4241654174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.093108+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4241654174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.102549+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.102549+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.102883+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.102883+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.138352+0000 mon.a (mon.0) 1382 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.138352+0000 mon.a (mon.0) 1382 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.139783+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.139783+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.140029+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.140029+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: cluster 2026-03-09T17:30:11.590335+0000 mon.a (mon.0) 1385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: cluster 2026-03-09T17:30:11.590335+0000 mon.a (mon.0) 1385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.998906+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:12 vm02 bash[23351]: audit 2026-03-09T17:30:11.998906+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: cluster 2026-03-09T17:30:10.695566+0000 mgr.y (mgr.14505) 184 : cluster [DBG] pgmap v177: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: cluster 2026-03-09T17:30:10.695566+0000 mgr.y (mgr.14505) 184 : cluster [DBG] pgmap v177: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:10.997676+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:10.997676+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.057441+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.057441+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.057472+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:12.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.057472+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.062294+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.062294+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: cluster 2026-03-09T17:30:11.092844+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: cluster 2026-03-09T17:30:11.092844+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.093108+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4241654174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.093108+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4241654174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.102549+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.102549+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.102883+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.102883+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.138352+0000 mon.a (mon.0) 1382 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.138352+0000 mon.a (mon.0) 1382 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.139783+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.139783+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.140029+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.140029+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: cluster 2026-03-09T17:30:11.590335+0000 mon.a (mon.0) 1385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: cluster 2026-03-09T17:30:11.590335+0000 mon.a (mon.0) 1385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.998906+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:12 vm00 bash[28333]: audit 2026-03-09T17:30:11.998906+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: cluster 2026-03-09T17:30:10.695566+0000 mgr.y (mgr.14505) 184 : cluster [DBG] pgmap v177: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: cluster 2026-03-09T17:30:10.695566+0000 mgr.y (mgr.14505) 184 : cluster [DBG] pgmap v177: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:10.997676+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:10.997676+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.057441+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.057441+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-60199-16"}]': finished 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.057472+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.057472+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60103-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.062294+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.062294+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: cluster 2026-03-09T17:30:11.092844+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: cluster 2026-03-09T17:30:11.092844+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.093108+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4241654174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.093108+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4241654174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.102549+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.102549+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.102883+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.102883+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.138352+0000 mon.a (mon.0) 1382 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.138352+0000 mon.a (mon.0) 1382 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.139783+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.139783+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.140029+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.140029+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: cluster 2026-03-09T17:30:11.590335+0000 mon.a (mon.0) 1385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: cluster 2026-03-09T17:30:11.590335+0000 mon.a (mon.0) 1385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.998906+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:12 vm00 bash[20770]: audit 2026-03-09T17:30:11.998906+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:11.594289+0000 mgr.y (mgr.14505) 185 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:11.594289+0000 mgr.y (mgr.14505) 185 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.079897+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.079897+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.080094+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.080094+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: cluster 2026-03-09T17:30:12.117130+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: cluster 2026-03-09T17:30:12.117130+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.118380+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.118380+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.121055+0000 mon.a (mon.0) 1390 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.121055+0000 mon.a (mon.0) 1390 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.296442+0000 mon.c (mon.2) 282 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.296442+0000 mon.c (mon.2) 282 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.999732+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:13 vm02 bash[23351]: audit 2026-03-09T17:30:12.999732+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:13.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:11.594289+0000 mgr.y (mgr.14505) 185 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:11.594289+0000 mgr.y (mgr.14505) 185 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.079897+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.079897+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.080094+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.080094+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: cluster 2026-03-09T17:30:12.117130+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: cluster 2026-03-09T17:30:12.117130+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.118380+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.118380+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.121055+0000 mon.a (mon.0) 1390 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.121055+0000 mon.a (mon.0) 1390 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.296442+0000 mon.c (mon.2) 282 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.296442+0000 mon.c (mon.2) 282 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.999732+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:13 vm00 bash[28333]: audit 2026-03-09T17:30:12.999732+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:11.594289+0000 mgr.y (mgr.14505) 185 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:11.594289+0000 mgr.y (mgr.14505) 185 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.079897+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.079897+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59908-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.080094+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.080094+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: cluster 2026-03-09T17:30:12.117130+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: cluster 2026-03-09T17:30:12.117130+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.118380+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.118380+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.121055+0000 mon.a (mon.0) 1390 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.121055+0000 mon.a (mon.0) 1390 : audit [INF] from='client.? 
192.168.123.100:0/2099738498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.296442+0000 mon.c (mon.2) 282 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.296442+0000 mon.c (mon.2) 282 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.999732+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:13 vm00 bash[20770]: audit 2026-03-09T17:30:12.999732+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: cluster 2026-03-09T17:30:12.696239+0000 mgr.y (mgr.14505) 186 : cluster [DBG] pgmap v180: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: cluster 2026-03-09T17:30:12.696239+0000 mgr.y (mgr.14505) 186 : cluster [DBG] pgmap v180: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.152035+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.152035+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.152239+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.152239+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
192.168.123.100:0/2099738498' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: cluster 2026-03-09T17:30:13.164504+0000 mon.a (mon.0) 1393 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: cluster 2026-03-09T17:30:13.164504+0000 mon.a (mon.0) 1393 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.194806+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.194806+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.196845+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.196845+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.198347+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.198347+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.199742+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.199742+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.201091+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 
192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.201091+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.208274+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:13.208274+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:14.000578+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:14 vm00 bash[20770]: audit 2026-03-09T17:30:14.000578+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: cluster 2026-03-09T17:30:12.696239+0000 mgr.y (mgr.14505) 186 : cluster [DBG] pgmap v180: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: cluster 2026-03-09T17:30:12.696239+0000 mgr.y (mgr.14505) 186 : cluster [DBG] pgmap v180: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.152035+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.152035+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.152239+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
192.168.123.100:0/2099738498' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.152239+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: cluster 2026-03-09T17:30:13.164504+0000 mon.a (mon.0) 1393 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: cluster 2026-03-09T17:30:13.164504+0000 mon.a (mon.0) 1393 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.194806+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.194806+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.196845+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.196845+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.198347+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.198347+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.199742+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.199742+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.201091+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.201091+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.208274+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:13.208274+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:14.000578+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:14 vm00 bash[28333]: audit 2026-03-09T17:30:14.000578+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: cluster 2026-03-09T17:30:12.696239+0000 mgr.y (mgr.14505) 186 : cluster [DBG] pgmap v180: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: cluster 2026-03-09T17:30:12.696239+0000 mgr.y (mgr.14505) 186 : cluster [DBG] pgmap v180: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.152035+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.152035+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60103-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.152239+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.152239+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 192.168.123.100:0/2099738498' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59916-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: cluster 2026-03-09T17:30:13.164504+0000 mon.a (mon.0) 1393 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: cluster 2026-03-09T17:30:13.164504+0000 mon.a (mon.0) 1393 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.194806+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.194806+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.196845+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.196845+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.198347+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.198347+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.199742+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.199742+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.201091+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.201091+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.208274+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:13.208274+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:14.000578+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:14 vm02 bash[23351]: audit 2026-03-09T17:30:14.000578+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:15.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.208568+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:15.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.208568+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.208669+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.208669+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: cluster 2026-03-09T17:30:14.222241+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: cluster 2026-03-09T17:30:14.222241+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.223474+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.223474+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.225146+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.225146+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.225561+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.225561+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.234490+0000 mon.a (mon.0) 1401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:14.234490+0000 mon.a (mon.0) 1401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:15.001369+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:15 vm00 bash[20770]: audit 2026-03-09T17:30:15.001369+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.208568+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.208568+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.208669+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.208669+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: cluster 2026-03-09T17:30:14.222241+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: cluster 2026-03-09T17:30:14.222241+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.223474+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.223474+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.225146+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.225146+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.225561+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.225561+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.234490+0000 mon.a (mon.0) 1401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:14.234490+0000 mon.a (mon.0) 1401 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:15.001369+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:15 vm00 bash[28333]: audit 2026-03-09T17:30:15.001369+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.208568+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.208568+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-60199-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.208669+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.208669+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59908-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: cluster 2026-03-09T17:30:14.222241+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: cluster 2026-03-09T17:30:14.222241+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.223474+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.223474+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.225146+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.225146+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.225561+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.225561+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.234490+0000 mon.a (mon.0) 1401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:14.234490+0000 mon.a (mon.0) 1401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:15.001369+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:15.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:15 vm02 bash[23351]: audit 2026-03-09T17:30:15.001369+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: cluster 2026-03-09T17:30:14.696896+0000 mgr.y (mgr.14505) 187 : cluster [DBG] pgmap v183: 372 pgs: 48 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: cluster 2026-03-09T17:30:14.696896+0000 mgr.y (mgr.14505) 187 : cluster [DBG] pgmap v183: 372 pgs: 48 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.297687+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.297687+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.315921+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.315921+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: cluster 2026-03-09T17:30:15.339718+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: cluster 2026-03-09T17:30:15.339718+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.353117+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/3088453572' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.353117+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/3088453572' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.361082+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.361082+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.370081+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:15.370081+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:16.002190+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:16 vm02 bash[23351]: audit 2026-03-09T17:30:16.002190+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: cluster 2026-03-09T17:30:14.696896+0000 mgr.y (mgr.14505) 187 : cluster [DBG] pgmap v183: 372 pgs: 48 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: cluster 2026-03-09T17:30:14.696896+0000 mgr.y (mgr.14505) 187 : cluster [DBG] pgmap v183: 372 pgs: 48 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.297687+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.297687+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.315921+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.315921+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: cluster 2026-03-09T17:30:15.339718+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: cluster 2026-03-09T17:30:15.339718+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.353117+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/3088453572' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.353117+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/3088453572' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.361082+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.361082+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.370081+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:15.370081+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:16.002190+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:16 vm00 bash[20770]: audit 2026-03-09T17:30:16.002190+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: cluster 2026-03-09T17:30:14.696896+0000 mgr.y (mgr.14505) 187 : cluster [DBG] pgmap v183: 372 pgs: 48 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: cluster 2026-03-09T17:30:14.696896+0000 mgr.y (mgr.14505) 187 : cluster [DBG] pgmap v183: 372 pgs: 48 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.297687+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.297687+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.315921+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.315921+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: cluster 2026-03-09T17:30:15.339718+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: cluster 2026-03-09T17:30:15.339718+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.353117+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/3088453572' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.353117+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 
192.168.123.100:0/3088453572' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.361082+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.361082+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.370081+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:15.370081+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:16.002190+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:16 vm00 bash[28333]: audit 2026-03-09T17:30:16.002190+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:30:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:30:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:30:17.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:16.303261+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:17.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:16.303261+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:16.303306+0000 mon.a (mon.0) 1407 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:16.303306+0000 mon.a (mon.0) 1407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:16.303346+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:16.303346+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: cluster 2026-03-09T17:30:16.316070+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: cluster 2026-03-09T17:30:16.316070+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: cluster 2026-03-09T17:30:16.591141+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: cluster 2026-03-09T17:30:16.591141+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:17.003036+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: audit 2026-03-09T17:30:17.003036+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: cluster 2026-03-09T17:30:17.311659+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:17 vm00 bash[28333]: cluster 2026-03-09T17:30:17.311659+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:16.303261+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:16.303261+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:16.303306+0000 mon.a (mon.0) 1407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:16.303306+0000 mon.a (mon.0) 1407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:16.303346+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:16.303346+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: cluster 2026-03-09T17:30:16.316070+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: cluster 2026-03-09T17:30:16.316070+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: cluster 2026-03-09T17:30:16.591141+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: cluster 2026-03-09T17:30:16.591141+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:17.003036+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: audit 2026-03-09T17:30:17.003036+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: cluster 2026-03-09T17:30:17.311659+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T17:30:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:17 vm00 bash[20770]: cluster 2026-03-09T17:30:17.311659+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:16.303261+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:16.303261+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59908-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:16.303306+0000 mon.a (mon.0) 1407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:16.303306+0000 mon.a (mon.0) 1407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60103-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:16.303346+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:16.303346+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59916-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: cluster 2026-03-09T17:30:16.316070+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: cluster 2026-03-09T17:30:16.316070+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: cluster 2026-03-09T17:30:16.591141+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: cluster 2026-03-09T17:30:16.591141+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:17.003036+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: audit 2026-03-09T17:30:17.003036+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: cluster 2026-03-09T17:30:17.311659+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T17:30:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:17 vm02 bash[23351]: cluster 2026-03-09T17:30:17.311659+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T17:30:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:18 vm00 bash[28333]: cluster 2026-03-09T17:30:16.697418+0000 mgr.y (mgr.14505) 188 : cluster [DBG] pgmap v186: 412 pgs: 88 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:18 vm00 bash[28333]: cluster 2026-03-09T17:30:16.697418+0000 mgr.y (mgr.14505) 188 : cluster [DBG] pgmap v186: 412 pgs: 88 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:18 vm00 bash[28333]: audit 2026-03-09T17:30:18.003992+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:18 vm00 bash[28333]: audit 2026-03-09T17:30:18.003992+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:18.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:18 vm00 bash[20770]: cluster 2026-03-09T17:30:16.697418+0000 mgr.y (mgr.14505) 188 : cluster [DBG] pgmap v186: 412 pgs: 88 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:18.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:18 vm00 bash[20770]: cluster 2026-03-09T17:30:16.697418+0000 mgr.y (mgr.14505) 188 : cluster [DBG] pgmap v186: 412 pgs: 88 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:18 vm00 bash[20770]: audit 2026-03-09T17:30:18.003992+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:18 vm00 bash[20770]: audit 2026-03-09T17:30:18.003992+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:18 vm02 bash[23351]: cluster 2026-03-09T17:30:16.697418+0000 mgr.y (mgr.14505) 188 : cluster [DBG] pgmap v186: 412 pgs: 88 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:18 vm02 bash[23351]: cluster 2026-03-09T17:30:16.697418+0000 mgr.y (mgr.14505) 188 : cluster [DBG] pgmap v186: 412 pgs: 88 unknown, 324 active+clean; 464 KiB data, 606 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:18 vm02 bash[23351]: audit 2026-03-09T17:30:18.003992+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:18 vm02 bash[23351]: audit 2026-03-09T17:30:18.003992+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: cluster 2026-03-09T17:30:18.504802+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: cluster 2026-03-09T17:30:18.504802+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.513817+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.513817+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.514917+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/4182465656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.514917+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/4182465656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.520893+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.520893+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.521012+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.521012+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.525123+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.525123+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.559349+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:18.559349+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: cluster 2026-03-09T17:30:18.697843+0000 mgr.y (mgr.14505) 189 : cluster [DBG] pgmap v189: 364 pgs: 32 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: cluster 2026-03-09T17:30:18.697843+0000 mgr.y (mgr.14505) 189 : cluster [DBG] pgmap v189: 364 pgs: 32 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.004697+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.004697+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.498698+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.498698+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.498885+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.498885+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.499079+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.499079+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.502394+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.502394+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.505053+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.505053+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: cluster 2026-03-09T17:30:19.509080+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: cluster 2026-03-09T17:30:19.509080+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.512612+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.512612+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.516033+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:19 vm02 bash[23351]: audit 2026-03-09T17:30:19.516033+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: cluster 2026-03-09T17:30:18.504802+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: cluster 2026-03-09T17:30:18.504802+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.513817+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 
192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.513817+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.514917+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/4182465656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.514917+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/4182465656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.520893+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.520893+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.521012+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.521012+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.525123+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.525123+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.559349+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:18.559349+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: cluster 2026-03-09T17:30:18.697843+0000 mgr.y (mgr.14505) 189 : cluster [DBG] pgmap v189: 364 pgs: 32 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: cluster 2026-03-09T17:30:18.697843+0000 mgr.y (mgr.14505) 189 : cluster [DBG] pgmap v189: 364 pgs: 32 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.004697+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.004697+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.498698+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.498698+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.498885+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.498885+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.499079+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.499079+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.502394+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.502394+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.505053+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.505053+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: cluster 2026-03-09T17:30:19.509080+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: cluster 2026-03-09T17:30:19.509080+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.512612+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.512612+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.516033+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:19 vm00 bash[28333]: audit 2026-03-09T17:30:19.516033+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: cluster 2026-03-09T17:30:18.504802+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: cluster 2026-03-09T17:30:18.504802+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.513817+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.513817+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.514917+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/4182465656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.514917+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/4182465656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.520893+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.520893+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.521012+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.521012+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.525123+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 
192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.525123+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.559349+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:18.559349+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: cluster 2026-03-09T17:30:18.697843+0000 mgr.y (mgr.14505) 189 : cluster [DBG] pgmap v189: 364 pgs: 32 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: cluster 2026-03-09T17:30:18.697843+0000 mgr.y (mgr.14505) 189 : cluster [DBG] pgmap v189: 364 pgs: 32 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.004697+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.004697+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.498698+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.498698+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.498885+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.498885+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59916-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.499079+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.499079+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.502394+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.502394+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/2399366056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.505053+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.505053+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.100:0/2513359939' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: cluster 2026-03-09T17:30:19.509080+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: cluster 2026-03-09T17:30:19.509080+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.512612+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.512612+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]: dispatch 2026-03-09T17:30:20.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.516033+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.040 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:19 vm00 bash[20770]: audit 2026-03-09T17:30:19.516033+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]: dispatch 2026-03-09T17:30:20.521 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 22 tests from LibRa api_aio: Running main() from gmock_main.cc 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [==========] Running 42 tests from 2 test suites. 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] Global test environment set-up. 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 26 tests from LibRadosAio 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.TooBig 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.TooBig (2394 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.SimpleWrite 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.SimpleWrite (3363 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.WaitForSafe 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.WaitForSafe (3282 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip (2240 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip2 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip2 (3037 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip3 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip3 (3016 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripAppend 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTripAppend (3037 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RemoveTest 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RemoveTest (3527 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.XattrsRoundTrip 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.XattrsRoundTrip (3146 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RmXattr 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RmXattr (2979 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.XattrIter 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.XattrIter (3212 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.IsComplete 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.IsComplete (3425 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.IsSafe 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.IsSafe (2696 ms) 2026-03-09T17:30:20.522 
INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.ReturnValue 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.ReturnValue (3488 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.Flush 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.Flush (3119 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.FlushAsync 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.FlushAsync (2996 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteFull 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteFull (3868 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteSame 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteSame (3063 ms) 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStat 2026-03-09T17:30:20.522 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.SimpleStat (3158 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.OperateMtime 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.OperateMtime (3333 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.Operate2Mtime 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.Operate2Mtime (3038 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStatNS 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.SimpleStatNS (3581 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.StatRemove 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.StatRemove (3291 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.ExecuteClass 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.ExecuteClass (2209 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.MultiWrite 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.MultiWrite (3273 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.AioUnlock 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.AioUnlock (3268 ms) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 26 tests from LibRadosAio (81039 ms total) 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 16 tests from LibRadosAioEC 2026-03-09T17:30:20.523 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleWrite 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: audit 2026-03-09T17:30:20.005416+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: audit 2026-03-09T17:30:20.005416+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: audit 2026-03-09T17:30:20.502355+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: audit 2026-03-09T17:30:20.502355+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: audit 2026-03-09T17:30:20.502479+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: audit 2026-03-09T17:30:20.502479+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: cluster 2026-03-09T17:30:20.505938+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T17:30:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:20 vm02 bash[23351]: cluster 2026-03-09T17:30:20.505938+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: audit 2026-03-09T17:30:20.005416+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: audit 2026-03-09T17:30:20.005416+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: audit 2026-03-09T17:30:20.502355+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: audit 2026-03-09T17:30:20.502355+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: audit 2026-03-09T17:30:20.502479+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: audit 2026-03-09T17:30:20.502479+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: cluster 2026-03-09T17:30:20.505938+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T17:30:21.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:20 vm00 bash[28333]: cluster 2026-03-09T17:30:20.505938+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: audit 2026-03-09T17:30:20.005416+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: audit 2026-03-09T17:30:20.005416+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: audit 2026-03-09T17:30:20.502355+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: audit 2026-03-09T17:30:20.502355+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60103-24"}]': finished 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: audit 2026-03-09T17:30:20.502479+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: audit 2026-03-09T17:30:20.502479+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59908-27"}]': finished 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: cluster 2026-03-09T17:30:20.505938+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T17:30:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:20 vm00 bash[20770]: cluster 2026-03-09T17:30:20.505938+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.569110+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.569110+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 
192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.606754+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.606754+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.607520+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.607520+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.607919+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.607919+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.608366+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.608366+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.608548+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:20.608548+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: cluster 2026-03-09T17:30:20.698303+0000 mgr.y (mgr.14505) 190 : cluster [DBG] pgmap v192: 332 pgs: 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: cluster 2026-03-09T17:30:20.698303+0000 mgr.y (mgr.14505) 190 : cluster [DBG] pgmap v192: 332 pgs: 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.006362+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.006362+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.506186+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.506186+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.527019+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/4052475357' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.527019+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/4052475357' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.527624+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.527624+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 
192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: cluster 2026-03-09T17:30:21.530916+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: cluster 2026-03-09T17:30:21.530916+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T17:30:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.531468+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.100:0/3342022144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.531468+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.100:0/3342022144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.534927+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.534927+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.535347+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.535347+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.535417+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:21 vm02 bash[23351]: audit 2026-03-09T17:30:21.535417+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:21.887 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:30:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:30:22.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.569110+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.569110+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.606754+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.606754+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.607520+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.607520+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.607919+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.607919+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.608366+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.608366+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 
192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.608548+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:20.608548+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: cluster 2026-03-09T17:30:20.698303+0000 mgr.y (mgr.14505) 190 : cluster [DBG] pgmap v192: 332 pgs: 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: cluster 2026-03-09T17:30:20.698303+0000 mgr.y (mgr.14505) 190 : cluster [DBG] pgmap v192: 332 pgs: 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.006362+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.006362+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.506186+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.506186+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.527019+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/4052475357' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.527019+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 
192.168.123.100:0/4052475357' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.527624+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.527624+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: cluster 2026-03-09T17:30:21.530916+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: cluster 2026-03-09T17:30:21.530916+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.531468+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.100:0/3342022144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.531468+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.100:0/3342022144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.534927+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.534927+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.535347+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.535347+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.535417+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:21 vm00 bash[28333]: audit 2026-03-09T17:30:21.535417+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.569110+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.569110+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.606754+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.606754+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.607520+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.607520+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.607919+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.607919+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.608366+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.608366+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.608548+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:20.608548+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: cluster 2026-03-09T17:30:20.698303+0000 mgr.y (mgr.14505) 190 : cluster [DBG] pgmap v192: 332 pgs: 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: cluster 2026-03-09T17:30:20.698303+0000 mgr.y (mgr.14505) 190 : cluster [DBG] pgmap v192: 332 pgs: 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.006362+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.006362+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.506186+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.506186+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59908-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.527019+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/4052475357' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.527019+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/4052475357' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.527624+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.527624+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: cluster 2026-03-09T17:30:21.530916+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: cluster 2026-03-09T17:30:21.530916+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.531468+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.100:0/3342022144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.531468+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.100:0/3342022144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.534927+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.534927+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.535347+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.535347+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.535417+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:21 vm00 bash[20770]: audit 2026-03-09T17:30:21.535417+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: cluster 2026-03-09T17:30:21.597284+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: cluster 2026-03-09T17:30:21.597284+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.601035+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.601035+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.601080+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.601080+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.601994+0000 mgr.y (mgr.14505) 191 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.601994+0000 mgr.y (mgr.14505) 191 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: cluster 2026-03-09T17:30:21.622093+0000 mon.a (mon.0) 1436 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: cluster 2026-03-09T17:30:21.622093+0000 mon.a (mon.0) 1436 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.806644+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.806644+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.807858+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:21.807858+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:22.007275+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:22.007275+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:22.007688+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:22.007688+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:22.007957+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:22 vm02 bash[23351]: audit 2026-03-09T17:30:22.007957+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: cluster 2026-03-09T17:30:21.597284+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: cluster 2026-03-09T17:30:21.597284+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.601035+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.601035+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.601080+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.601080+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.601994+0000 mgr.y (mgr.14505) 191 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.601994+0000 mgr.y (mgr.14505) 191 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: cluster 2026-03-09T17:30:21.622093+0000 mon.a (mon.0) 1436 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: cluster 2026-03-09T17:30:21.622093+0000 mon.a (mon.0) 1436 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.806644+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.806644+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.807858+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:21.807858+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:22.007275+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:22.007275+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:22.007688+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:22.007688+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:22.007957+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:22 vm00 bash[28333]: audit 2026-03-09T17:30:22.007957+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: cluster 2026-03-09T17:30:21.597284+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: cluster 2026-03-09T17:30:21.597284+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.601035+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.601035+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-60103-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.601080+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.601080+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59916-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.601994+0000 mgr.y (mgr.14505) 191 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.601994+0000 mgr.y (mgr.14505) 191 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: cluster 2026-03-09T17:30:21.622093+0000 mon.a (mon.0) 1436 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: cluster 2026-03-09T17:30:21.622093+0000 mon.a (mon.0) 1436 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.806644+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.806644+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.807858+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:21.807858+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:22.007275+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:22.007275+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:22.007688+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:22.007688+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:22.007957+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:22 vm00 bash[20770]: audit 2026-03-09T17:30:22.007957+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.607277+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.607277+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.607416+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.607416+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.607507+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.607507+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.609520+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.609520+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: cluster 2026-03-09T17:30:22.623515+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: cluster 2026-03-09T17:30:22.623515+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.631274+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.631274+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.643794+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.643794+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.650659+0000 mon.c (mon.2) 306 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.650659+0000 mon.c (mon.2) 306 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.660327+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.660327+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.665190+0000 mon.a (mon.0) 1444 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: audit 2026-03-09T17:30:22.665190+0000 mon.a (mon.0) 1444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: cluster 2026-03-09T17:30:22.698757+0000 mgr.y (mgr.14505) 192 : cluster [DBG] pgmap v196: 340 pgs: 8 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:23 vm00 bash[28333]: cluster 2026-03-09T17:30:22.698757+0000 mgr.y (mgr.14505) 192 : cluster [DBG] pgmap v196: 340 pgs: 8 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.607277+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.607277+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.607416+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.607416+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.607507+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.607507+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.609520+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.609520+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: cluster 2026-03-09T17:30:22.623515+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: cluster 2026-03-09T17:30:22.623515+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.631274+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.631274+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.643794+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.643794+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.650659+0000 mon.c (mon.2) 306 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.650659+0000 mon.c (mon.2) 306 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.660327+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.660327+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.665190+0000 mon.a (mon.0) 1444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: audit 2026-03-09T17:30:22.665190+0000 mon.a (mon.0) 1444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: cluster 2026-03-09T17:30:22.698757+0000 mgr.y (mgr.14505) 192 : cluster [DBG] pgmap v196: 340 pgs: 8 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:23 vm00 bash[20770]: cluster 2026-03-09T17:30:22.698757+0000 mgr.y (mgr.14505) 192 : cluster [DBG] pgmap v196: 340 pgs: 8 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.607277+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.607277+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59908-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.607416+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.607416+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.607507+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.607507+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.609520+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.609520+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: cluster 2026-03-09T17:30:22.623515+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: cluster 2026-03-09T17:30:22.623515+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.631274+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.631274+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.643794+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.643794+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.650659+0000 mon.c (mon.2) 306 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.650659+0000 mon.c (mon.2) 306 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.660327+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.660327+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.665190+0000 mon.a (mon.0) 1444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: audit 2026-03-09T17:30:22.665190+0000 mon.a (mon.0) 1444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: cluster 2026-03-09T17:30:22.698757+0000 mgr.y (mgr.14505) 192 : cluster [DBG] pgmap v196: 340 pgs: 8 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:23 vm02 bash[23351]: cluster 2026-03-09T17:30:22.698757+0000 mgr.y (mgr.14505) 192 : cluster [DBG] pgmap v196: 340 pgs: 8 unknown, 332 active+clean; 480 KiB data, 628 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: cluster 2026-03-09T17:30:23.607744+0000 mon.a (mon.0) 1445 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: cluster 2026-03-09T17:30:23.607744+0000 mon.a (mon.0) 1445 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.706462+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.706462+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.706499+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.706499+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.712813+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.712813+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: cluster 2026-03-09T17:30:23.716314+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: cluster 2026-03-09T17:30:23.716314+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.734565+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/68040802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.734565+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/68040802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.734676+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.100:0/120929540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.734676+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.100:0/120929540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.767519+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.767519+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.767607+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.767607+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.864111+0000 mon.c (mon.2) 310 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:24 vm00 bash[28333]: audit 2026-03-09T17:30:23.864111+0000 mon.c (mon.2) 310 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:24 vm00 bash[20770]: cluster 2026-03-09T17:30:23.607744+0000 mon.a (mon.0) 1445 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:24 vm00 bash[20770]: cluster 2026-03-09T17:30:23.607744+0000 mon.a (mon.0) 1445 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:24 vm00 bash[20770]: audit 2026-03-09T17:30:23.706462+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:24 vm00 bash[20770]: audit 2026-03-09T17:30:23.706462+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:24 vm00 bash[20770]: audit 2026-03-09T17:30:23.706499+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:24 vm00 bash[20770]: audit 2026-03-09T17:30:23.706499+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:30:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.712813+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.712813+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: cluster 2026-03-09T17:30:23.716314+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: cluster 2026-03-09T17:30:23.716314+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.734565+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/68040802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.734565+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/68040802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.734676+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.100:0/120929540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.734676+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.100:0/120929540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.767519+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.767519+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.767607+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.767607+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.864111+0000 mon.c (mon.2) 310 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:25 vm00 bash[20770]: audit 2026-03-09T17:30:23.864111+0000 mon.c (mon.2) 310 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: cluster 2026-03-09T17:30:23.607744+0000 mon.a (mon.0) 1445 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: cluster 2026-03-09T17:30:23.607744+0000 mon.a (mon.0) 1445 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.706462+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.706462+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-13"}]': finished 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.706499+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.706499+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.712813+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.712813+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: cluster 2026-03-09T17:30:23.716314+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: cluster 2026-03-09T17:30:23.716314+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.734565+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/68040802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.734565+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/68040802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.734676+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.100:0/120929540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.734676+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.100:0/120929540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.767519+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.767519+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.767607+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.767607+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.864111+0000 mon.c (mon.2) 310 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:24 vm02 bash[23351]: audit 2026-03-09T17:30:23.864111+0000 mon.c (mon.2) 310 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: cluster 2026-03-09T17:30:24.699147+0000 mgr.y (mgr.14505) 193 : cluster [DBG] pgmap v198: 404 pgs: 64 unknown, 8 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 281 active+clean; 493 KiB data, 623 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 KiB/s wr, 12 op/s 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: cluster 2026-03-09T17:30:24.699147+0000 mgr.y (mgr.14505) 193 : cluster [DBG] pgmap v198: 404 pgs: 64 unknown, 8 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 281 active+clean; 493 KiB data, 623 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 KiB/s wr, 12 op/s 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.864906+0000 mon.c (mon.2) 311 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.864906+0000 mon.c (mon.2) 311 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.902281+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.902281+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.902461+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.902461+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: cluster 2026-03-09T17:30:24.937449+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: cluster 2026-03-09T17:30:24.937449+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.941944+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.941944+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.979399+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:24.979399+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.865657+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.865657+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.905738+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.905738+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: cluster 2026-03-09T17:30:25.927148+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: cluster 2026-03-09T17:30:25.927148+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.937568+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.937568+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.952539+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.952539+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.958195+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.958195+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.958654+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:26 vm00 bash[28333]: audit 2026-03-09T17:30:25.958654+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: cluster 2026-03-09T17:30:24.699147+0000 mgr.y (mgr.14505) 193 : cluster [DBG] pgmap v198: 404 pgs: 64 unknown, 8 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 281 active+clean; 493 KiB data, 623 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 KiB/s wr, 12 op/s 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: cluster 2026-03-09T17:30:24.699147+0000 mgr.y (mgr.14505) 193 : cluster [DBG] pgmap v198: 404 pgs: 64 unknown, 8 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 281 active+clean; 493 KiB data, 623 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 KiB/s wr, 12 op/s 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.864906+0000 mon.c (mon.2) 311 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.864906+0000 mon.c (mon.2) 311 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.902281+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.902281+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.902461+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.902461+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: cluster 2026-03-09T17:30:24.937449+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: cluster 2026-03-09T17:30:24.937449+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T17:30:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.941944+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 
192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.941944+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.979399+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:24.979399+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.865657+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.865657+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.905738+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.905738+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: cluster 2026-03-09T17:30:25.927148+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: cluster 2026-03-09T17:30:25.927148+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.937568+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.937568+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.952539+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.952539+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.958195+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.958195+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.958654+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:26 vm00 bash[20770]: audit 2026-03-09T17:30:25.958654+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: cluster 2026-03-09T17:30:24.699147+0000 mgr.y (mgr.14505) 193 : cluster [DBG] pgmap v198: 404 pgs: 64 unknown, 8 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 281 active+clean; 493 KiB data, 623 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 KiB/s wr, 12 op/s 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: cluster 2026-03-09T17:30:24.699147+0000 mgr.y (mgr.14505) 193 : cluster [DBG] pgmap v198: 404 pgs: 64 unknown, 8 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 281 active+clean; 493 KiB data, 623 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 KiB/s wr, 12 op/s 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.864906+0000 mon.c (mon.2) 311 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.864906+0000 mon.c (mon.2) 311 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.902281+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.902281+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59916-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.902461+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.902461+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-60103-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: cluster 2026-03-09T17:30:24.937449+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: cluster 2026-03-09T17:30:24.937449+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.941944+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.941944+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.979399+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:24.979399+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.865657+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.865657+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.905738+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.905738+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: cluster 2026-03-09T17:30:25.927148+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: cluster 2026-03-09T17:30:25.927148+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.937568+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.937568+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.952539+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.952539+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.100:0/1684717558' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.958195+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.958195+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.958654+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:26 vm02 bash[23351]: audit 2026-03-09T17:30:25.958654+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:30:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:30:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: cluster 2026-03-09T17:30:26.599365+0000 mon.a (mon.0) 1459 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: cluster 2026-03-09T17:30:26.599365+0000 mon.a (mon.0) 1459 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.603467+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.603467+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.603506+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.603506+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: cluster 2026-03-09T17:30:26.629791+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: cluster 2026-03-09T17:30:26.629791+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.630712+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 
192.168.123.100:0/1450498284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.630712+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/1450498284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.632577+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.632577+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.633051+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.633051+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.694158+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/1322853465' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.694158+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/1322853465' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.695486+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.695486+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.701725+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.701725+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.703890+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.703890+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.704420+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.704420+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.706117+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.706117+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.707027+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.707027+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 
192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.708150+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.708150+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.866588+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:27 vm00 bash[28333]: audit 2026-03-09T17:30:26.866588+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: cluster 2026-03-09T17:30:26.599365+0000 mon.a (mon.0) 1459 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: cluster 2026-03-09T17:30:26.599365+0000 mon.a (mon.0) 1459 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.603467+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.603467+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.603506+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.603506+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: cluster 2026-03-09T17:30:26.629791+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T17:30:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: cluster 2026-03-09T17:30:26.629791+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.630712+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/1450498284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.630712+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/1450498284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.632577+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.632577+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.633051+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.633051+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.694158+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/1322853465' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.694158+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 
192.168.123.100:0/1322853465' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.695486+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.695486+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.701725+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.701725+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.703890+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.703890+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.704420+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.704420+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.706117+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.706117+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.707027+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.707027+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.708150+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.708150+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.866588+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:27.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:27 vm00 bash[20770]: audit 2026-03-09T17:30:26.866588+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: cluster 2026-03-09T17:30:26.599365+0000 mon.a (mon.0) 1459 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: cluster 2026-03-09T17:30:26.599365+0000 mon.a (mon.0) 1459 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.603467+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.603467+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59908-28"}]': finished 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.603506+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.603506+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: cluster 2026-03-09T17:30:26.629791+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: cluster 2026-03-09T17:30:26.629791+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.630712+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/1450498284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.630712+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/1450498284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.632577+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.632577+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.633051+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.633051+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.694158+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 
192.168.123.100:0/1322853465' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.694158+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/1322853465' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.695486+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.695486+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.701725+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.701725+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.703890+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.703890+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.704420+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.704420+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.706117+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.706117+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.707027+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.707027+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.708150+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.708150+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.866588+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:27 vm02 bash[23351]: audit 2026-03-09T17:30:26.866588+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:28.659 INFO:tasks.workunit.client.0.vm00.stdout: RUN ] LibRadosSnapshotsECPP.SnapGetNamePP 2026-03-09T17:30:28.659 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapGetNamePP (1317 ms) 2026-03-09T17:30:28.659 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP (9200 ms total) 2026-03-09T17:30:28.659 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.SnapPP 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.SnapPP (4322 ms) 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.RollbackPP 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.RollbackPP (3078 ms) 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.Bug11677 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.Bug11677 (4293 ms) 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP (11693 ms total) 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] Global test environment tear-down 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [==========] 21 tests from 5 test suites ran. (96273 ms total) 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ PASSED ] 20 tests. 
2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ SKIPPED ] 1 test, listed below: 2026-03-09T17:30:28.660 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: cluster 2026-03-09T17:30:26.699569+0000 mgr.y (mgr.14505) 194 : cluster [DBG] pgmap v202: 388 pgs: 96 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 623 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: cluster 2026-03-09T17:30:26.699569+0000 mgr.y (mgr.14505) 194 : cluster [DBG] pgmap v202: 388 pgs: 96 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 623 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.422454+0000 mon.a (mon.0) 1469 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.422454+0000 mon.a (mon.0) 1469 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.481383+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.481383+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615448+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615448+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615534+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615534+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615660+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615660+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615730+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.615730+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.627717+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.627717+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: cluster 2026-03-09T17:30:27.630334+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: cluster 2026-03-09T17:30:27.630334+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.631789+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.631789+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.632439+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.632439+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.700744+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.700744+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.702031+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.702031+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.867411+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:28 vm00 bash[28333]: audit 2026-03-09T17:30:27.867411+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: cluster 2026-03-09T17:30:26.699569+0000 mgr.y (mgr.14505) 194 : cluster [DBG] pgmap v202: 388 pgs: 96 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 623 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: cluster 2026-03-09T17:30:26.699569+0000 mgr.y (mgr.14505) 194 : cluster [DBG] pgmap v202: 388 pgs: 96 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 623 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.422454+0000 mon.a (mon.0) 1469 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.422454+0000 mon.a (mon.0) 1469 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.481383+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.481383+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615448+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615448+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615534+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615534+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615660+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615660+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615730+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.615730+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.627717+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.627717+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: cluster 2026-03-09T17:30:27.630334+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: cluster 2026-03-09T17:30:27.630334+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.631789+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.631789+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.632439+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.632439+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.700744+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.700744+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.702031+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.702031+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.867411+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:28 vm00 bash[20770]: audit 2026-03-09T17:30:27.867411+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: cluster 2026-03-09T17:30:26.699569+0000 mgr.y (mgr.14505) 194 : cluster [DBG] pgmap v202: 388 pgs: 96 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 623 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: cluster 2026-03-09T17:30:26.699569+0000 mgr.y (mgr.14505) 194 : cluster [DBG] pgmap v202: 388 pgs: 96 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 623 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.422454+0000 mon.a (mon.0) 1469 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.422454+0000 mon.a (mon.0) 1469 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.481383+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.481383+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615448+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615448+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-60103-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615534+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615534+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615660+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615660+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59916-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615730+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.615730+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59908-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.627717+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.627717+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: cluster 2026-03-09T17:30:27.630334+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: cluster 2026-03-09T17:30:27.630334+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.631789+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.631789+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.632439+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.632439+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.700744+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.700744+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.702031+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.702031+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.867411+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:28.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:28 vm02 bash[23351]: audit 2026-03-09T17:30:27.867411+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.624786+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.624786+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.624895+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.624895+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.640367+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.640367+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: cluster 2026-03-09T17:30:28.643954+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: cluster 2026-03-09T17:30:28.643954+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.645530+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.645530+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.680117+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.680117+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 
192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.681567+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.681567+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.685542+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.685542+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.692697+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.692697+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.694041+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.694041+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.695645+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.695645+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: cluster 2026-03-09T17:30:28.699974+0000 mgr.y (mgr.14505) 195 : cluster [DBG] pgmap v205: 324 pgs: 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 6 op/s 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: cluster 2026-03-09T17:30:28.699974+0000 mgr.y (mgr.14505) 195 : cluster [DBG] pgmap v205: 324 pgs: 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 6 op/s 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.700652+0000 mon.c (mon.2) 318 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.700652+0000 mon.c (mon.2) 318 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.701941+0000 mon.a (mon.0) 1485 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.701941+0000 mon.a (mon.0) 1485 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.868302+0000 mon.c (mon.2) 319 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:29 vm00 bash[28333]: audit 2026-03-09T17:30:28.868302+0000 mon.c (mon.2) 319 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.624786+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.624786+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.624895+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.624895+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.640367+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.640367+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: cluster 2026-03-09T17:30:28.643954+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: cluster 2026-03-09T17:30:28.643954+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.645530+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.645530+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.680117+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.680117+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 
192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.681567+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.681567+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.685542+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.685542+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.692697+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.692697+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.694041+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.694041+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.695645+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.695645+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: cluster 2026-03-09T17:30:28.699974+0000 mgr.y (mgr.14505) 195 : cluster [DBG] pgmap v205: 324 pgs: 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 6 op/s 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: cluster 2026-03-09T17:30:28.699974+0000 mgr.y (mgr.14505) 195 : cluster [DBG] pgmap v205: 324 pgs: 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 6 op/s 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.700652+0000 mon.c (mon.2) 318 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.700652+0000 mon.c (mon.2) 318 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.701941+0000 mon.a (mon.0) 1485 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.701941+0000 mon.a (mon.0) 1485 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.868302+0000 mon.c (mon.2) 319 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:29 vm00 bash[20770]: audit 2026-03-09T17:30:28.868302+0000 mon.c (mon.2) 319 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.624786+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.624786+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 
192.168.123.100:0/3571048153' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-60199-21"}]': finished 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.624895+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.624895+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.640367+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.640367+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: cluster 2026-03-09T17:30:28.643954+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: cluster 2026-03-09T17:30:28.643954+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.645530+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.645530+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.680117+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.680117+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 
192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.681567+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.681567+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.685542+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.685542+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.692697+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.692697+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.694041+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.694041+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.695645+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.695645+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: cluster 2026-03-09T17:30:28.699974+0000 mgr.y (mgr.14505) 195 : cluster [DBG] pgmap v205: 324 pgs: 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 6 op/s 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: cluster 2026-03-09T17:30:28.699974+0000 mgr.y (mgr.14505) 195 : cluster [DBG] pgmap v205: 324 pgs: 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 6 op/s 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.700652+0000 mon.c (mon.2) 318 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.700652+0000 mon.c (mon.2) 318 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.701941+0000 mon.a (mon.0) 1485 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.701941+0000 mon.a (mon.0) 1485 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.868302+0000 mon.c (mon.2) 319 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:29 vm02 bash[23351]: audit 2026-03-09T17:30:28.868302+0000 mon.c (mon.2) 319 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643164+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643164+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643390+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643390+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643449+0000 mon.a (mon.0) 1488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643449+0000 mon.a (mon.0) 1488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643497+0000 mon.a (mon.0) 1489 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.643497+0000 mon.a (mon.0) 1489 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: cluster 2026-03-09T17:30:29.669526+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: cluster 2026-03-09T17:30:29.669526+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.676518+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.676518+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.676891+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.676891+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.678878+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.678878+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.678984+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.678984+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.681406+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.100:0/20559573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.681406+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.100:0/20559573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.681693+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.681693+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.869147+0000 mon.c (mon.2) 321 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:30 vm00 bash[28333]: audit 2026-03-09T17:30:29.869147+0000 mon.c (mon.2) 321 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643164+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643164+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643390+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643390+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643449+0000 mon.a (mon.0) 1488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643449+0000 mon.a (mon.0) 1488 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643497+0000 mon.a (mon.0) 1489 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.643497+0000 mon.a (mon.0) 1489 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: cluster 2026-03-09T17:30:29.669526+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: cluster 2026-03-09T17:30:29.669526+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T17:30:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.676518+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.676518+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.676891+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.676891+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.678878+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.678878+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.678984+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.678984+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.681406+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.100:0/20559573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.681406+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.100:0/20559573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.681693+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.681693+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.869147+0000 mon.c (mon.2) 321 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:30 vm00 bash[20770]: audit 2026-03-09T17:30:29.869147+0000 mon.c (mon.2) 321 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643164+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643164+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59908-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643390+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643390+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643449+0000 mon.a (mon.0) 1488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643449+0000 mon.a (mon.0) 1488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643497+0000 mon.a (mon.0) 1489 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.643497+0000 mon.a (mon.0) 1489 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: cluster 2026-03-09T17:30:29.669526+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: cluster 2026-03-09T17:30:29.669526+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.676518+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.676518+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.676891+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.676891+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.678878+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.678878+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.678984+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.678984+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.681406+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.100:0/20559573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.681406+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.100:0/20559573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.681693+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.681693+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.869147+0000 mon.c (mon.2) 321 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:30 vm02 bash[23351]: audit 2026-03-09T17:30:29.869147+0000 mon.c (mon.2) 321 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:30.643687+0000 mon.a (mon.0) 1494 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:30.643687+0000 mon.a (mon.0) 1494 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:30.670042+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]': finished 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:30.670042+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]': finished 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:30.670118+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:30.670118+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:30.679653+0000 mon.a (mon.0) 1497 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:30.679653+0000 mon.a (mon.0) 1497 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:30.700576+0000 mgr.y (mgr.14505) 196 : cluster [DBG] pgmap v208: 364 pgs: 40 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.5 KiB/s wr, 9 op/s 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:30.700576+0000 mgr.y (mgr.14505) 196 : cluster [DBG] pgmap v208: 364 pgs: 40 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.5 KiB/s wr, 9 op/s 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:30.870028+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:30.870028+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:31.601624+0000 mon.a (mon.0) 1498 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:31.601624+0000 mon.a (mon.0) 1498 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:31.680623+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:31.680623+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:31.683502+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: cluster 2026-03-09T17:30:31.683502+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:31.684966+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:31.684966+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:31.687949+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:31 vm02 bash[23351]: audit 2026-03-09T17:30:31.687949+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:31.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:30:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:30.643687+0000 mon.a (mon.0) 1494 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:30.643687+0000 mon.a (mon.0) 1494 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:30.670042+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:30.670042+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:30.670118+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:30.670118+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:30.679653+0000 mon.a (mon.0) 1497 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:30.679653+0000 mon.a (mon.0) 1497 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:30.700576+0000 mgr.y (mgr.14505) 196 : cluster [DBG] pgmap v208: 364 pgs: 40 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.5 KiB/s wr, 9 op/s 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:30.700576+0000 mgr.y (mgr.14505) 196 : cluster [DBG] pgmap v208: 364 pgs: 40 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.5 KiB/s wr, 9 op/s 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:30.870028+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:30.870028+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:31.601624+0000 mon.a (mon.0) 1498 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:31.601624+0000 mon.a (mon.0) 1498 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:31.680623+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:31.680623+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:31.683502+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: cluster 2026-03-09T17:30:31.683502+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:31.684966+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:31.684966+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:31.687949+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:31 vm00 bash[28333]: audit 2026-03-09T17:30:31.687949+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:30.643687+0000 mon.a (mon.0) 1494 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:30.643687+0000 mon.a (mon.0) 1494 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:30.670042+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:30.670042+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-15", "mode": "writeback"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:30.670118+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:30.670118+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59916-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:30.679653+0000 mon.a (mon.0) 1497 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:30.679653+0000 mon.a (mon.0) 1497 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:30.700576+0000 mgr.y (mgr.14505) 196 : cluster [DBG] pgmap v208: 364 pgs: 40 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.5 KiB/s wr, 9 op/s 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:30.700576+0000 mgr.y (mgr.14505) 196 : cluster [DBG] pgmap v208: 364 pgs: 40 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.5 KiB/s wr, 9 op/s 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:30.870028+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:30.870028+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:31.601624+0000 mon.a (mon.0) 1498 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:31.601624+0000 mon.a (mon.0) 1498 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:31.680623+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:31.680623+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-60103-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:31.683502+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T17:30:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: cluster 2026-03-09T17:30:31.683502+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T17:30:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:31.684966+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:31.684966+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:31.687949+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:31 vm00 bash[20770]: audit 2026-03-09T17:30:31.687949+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.610993+0000 mgr.y (mgr.14505) 197 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.610993+0000 mgr.y (mgr.14505) 197 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.766032+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.766032+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.767281+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.767281+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.870818+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:31.870818+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.692455+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.692455+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.692537+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.692537+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.697578+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.697578+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.697625+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.697625+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: cluster 2026-03-09T17:30:32.710481+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: cluster 2026-03-09T17:30:32.710481+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.711425+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.711425+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.711751+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.711751+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.712549+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/2811603610' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.712549+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/2811603610' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.714291+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:32 vm00 bash[28333]: audit 2026-03-09T17:30:32.714291+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.610993+0000 mgr.y (mgr.14505) 197 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.610993+0000 mgr.y (mgr.14505) 197 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.766032+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.766032+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.767281+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.767281+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.870818+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:31.870818+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.692455+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.692455+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.692537+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.692537+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.697578+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.697578+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.697625+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.697625+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: cluster 2026-03-09T17:30:32.710481+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: cluster 2026-03-09T17:30:32.710481+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.711425+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.711425+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.711751+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.711751+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.712549+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/2811603610' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.712549+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/2811603610' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.714291+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:32 vm00 bash[20770]: audit 2026-03-09T17:30:32.714291+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.610993+0000 mgr.y (mgr.14505) 197 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.610993+0000 mgr.y (mgr.14505) 197 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.766032+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.766032+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.767281+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.767281+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.870818+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:31.870818+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.692455+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.692455+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.692537+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.692537+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.697578+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.697578+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/2633048059' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.697625+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.697625+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: cluster 2026-03-09T17:30:32.710481+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: cluster 2026-03-09T17:30:32.710481+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.711425+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.711425+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.711751+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.711751+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.712549+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/2811603610' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.712549+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/2811603610' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.714291+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:32 vm02 bash[23351]: audit 2026-03-09T17:30:32.714291+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: cluster 2026-03-09T17:30:32.708377+0000 mgr.y (mgr.14505) 198 : cluster [DBG] pgmap v210: 332 pgs: 8 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: cluster 2026-03-09T17:30:32.708377+0000 mgr.y (mgr.14505) 198 : cluster [DBG] pgmap v210: 332 pgs: 8 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:32.871613+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:32.871613+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: cluster 2026-03-09T17:30:33.694966+0000 mon.a (mon.0) 1509 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: cluster 2026-03-09T17:30:33.694966+0000 mon.a (mon.0) 1509 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.704503+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.704503+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.704572+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.704572+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.704620+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.704620+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: cluster 2026-03-09T17:30:33.711990+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: cluster 2026-03-09T17:30:33.711990+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.720130+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.720130+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.723724+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:33 vm00 bash[28333]: audit 2026-03-09T17:30:33.723724+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: cluster 2026-03-09T17:30:32.708377+0000 mgr.y (mgr.14505) 198 : cluster [DBG] pgmap v210: 332 pgs: 8 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: cluster 2026-03-09T17:30:32.708377+0000 mgr.y (mgr.14505) 198 : cluster [DBG] pgmap v210: 332 pgs: 8 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:32.871613+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:32.871613+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: cluster 2026-03-09T17:30:33.694966+0000 mon.a (mon.0) 1509 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: cluster 2026-03-09T17:30:33.694966+0000 mon.a (mon.0) 1509 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.704503+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.704503+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.704572+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.704572+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.704620+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.704620+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: cluster 2026-03-09T17:30:33.711990+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: cluster 2026-03-09T17:30:33.711990+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.720130+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.720130+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 
192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.723724+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:33 vm00 bash[20770]: audit 2026-03-09T17:30:33.723724+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: cluster 2026-03-09T17:30:32.708377+0000 mgr.y (mgr.14505) 198 : cluster [DBG] pgmap v210: 332 pgs: 8 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: cluster 2026-03-09T17:30:32.708377+0000 mgr.y (mgr.14505) 198 : cluster [DBG] pgmap v210: 332 pgs: 8 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 460 KiB data, 621 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:32.871613+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:32.871613+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: cluster 2026-03-09T17:30:33.694966+0000 mon.a (mon.0) 1509 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: cluster 2026-03-09T17:30:33.694966+0000 mon.a (mon.0) 1509 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.704503+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.704503+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59908-29"}]': finished 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.704572+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.704572+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-15"}]': finished 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.704620+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.704620+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59916-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: cluster 2026-03-09T17:30:33.711990+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: cluster 2026-03-09T17:30:33.711990+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.720130+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.720130+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.723724+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:33 vm02 bash[23351]: audit 2026-03-09T17:30:33.723724+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:34.719 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: Running main() from gmock_main.cc 2026-03-09T17:30:34.719 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [==========] Running 57 tests from 4 test suites. 2026-03-09T17:30:34.719 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] Global test environment set-up. 
2026-03-09T17:30:34.719 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio 2026-03-09T17:30:34.719 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.TooBigPP 2026-03-09T17:30:34.719 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.TooBigPP (2389 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolQuotaPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolQuotaPP (19531 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleWritePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleWritePP (6107 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.WaitForSafePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.WaitForSafePP (3039 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP (3221 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP2 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP2 (3643 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP3 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP3 (2536 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripSparseReadPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripSparseReadPP (3272 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsCompletePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.IsCompletePP (3250 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsSafePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.IsSafePP (3764 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.ReturnValuePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.ReturnValuePP (3033 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushPP (3176 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushAsyncPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushAsyncPP (3364 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP (3074 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP2 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: 
api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP2 (3027 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP (3589 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP2 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP2 (3254 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPPNS 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPPNS (2351 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPP (3354 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime (3123 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime2 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime2 (3093 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.StatRemovePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.StatRemovePP (3193 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.ExecuteClassPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.ExecuteClassPP (2138 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.OmapPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.OmapPP (3303 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiWritePP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiWritePP (2692 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.AioUnlockPP 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.AioUnlockPP (3070 ms) 2026-03-09T17:30:34.720 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripAppendPP 2026-03-09T17:30:35.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.779424+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.779424+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 
192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.779633+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.779633+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.782370+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.782370+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.782583+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.782583+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.785064+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.785064+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.785342+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.785342+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.872413+0000 mon.c (mon.2) 328 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:33.872413+0000 mon.c (mon.2) 328 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.707593+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.707593+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.707659+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.707659+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: cluster 2026-03-09T17:30:34.711027+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: cluster 2026-03-09T17:30:34.711027+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.712304+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.712304+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.713934+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.713934+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.716712+0000 mon.c (mon.2) 329 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.716712+0000 mon.c (mon.2) 329 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.721000+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.721000+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.725053+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.725053+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.732518+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:34 vm00 bash[28333]: audit 2026-03-09T17:30:34.732518+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.779424+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.779424+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.779633+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.779633+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.782370+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.782370+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.782583+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.782583+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.785064+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.785064+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 
192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.785342+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.785342+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.872413+0000 mon.c (mon.2) 328 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:33.872413+0000 mon.c (mon.2) 328 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.707593+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.707593+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.707659+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.707659+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: cluster 2026-03-09T17:30:34.711027+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: cluster 2026-03-09T17:30:34.711027+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.712304+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 
192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.712304+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.713934+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.713934+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.716712+0000 mon.c (mon.2) 329 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.716712+0000 mon.c (mon.2) 329 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.721000+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.721000+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.725053+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.725053+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.732518+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:34 vm00 bash[20770]: audit 2026-03-09T17:30:34.732518+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.779424+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.779424+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.779633+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.779633+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.782370+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.782370+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.782583+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.782583+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.785064+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 
192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.785064+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.785342+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.785342+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.872413+0000 mon.c (mon.2) 328 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:33.872413+0000 mon.c (mon.2) 328 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.707593+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.707593+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.707659+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.707659+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59908-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: cluster 2026-03-09T17:30:34.711027+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: cluster 2026-03-09T17:30:34.711027+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.712304+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.712304+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1491358974' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.713934+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.713934+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.716712+0000 mon.c (mon.2) 329 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.716712+0000 mon.c (mon.2) 329 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.721000+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.721000+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.725053+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 
192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.725053+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.732518+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:34 vm02 bash[23351]: audit 2026-03-09T17:30:34.732518+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:35.746 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAiodosMiscPP (74390 ms total) 2026-03-09T17:30:35.746 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:35.746 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosTwoPoolsECPP.CopyFrom 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosTwoPoolsECPP.CopyFrom (204 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP (204 ms total) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)0, Checksummer::xxhash32, ceph_le > 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Subset 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Subset (69 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Chunked 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Chunked (3 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0 (72 ms total) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)1, Checksummer::xxhash64, ceph_le > 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Subset 2026-03-09T17:30:35.747 
INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Subset (98 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Chunked 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Chunked (12 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1 (110 ms total) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)2, Checksummer::crc32c, ceph_le > 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Subset 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Subset (89 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Chunked 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Chunked (2 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2 (91 ms total) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscECPP.CompareExtentRange 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscECPP.CompareExtentRange (1069 ms) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP (1069 ms total) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] Global test environment tear-down 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [==========] 31 tests from 7 test suites ran. (103476 ms total) 2026-03-09T17:30:35.747 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ PASSED ] 31 tests. 2026-03-09T17:30:36.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: cluster 2026-03-09T17:30:34.708767+0000 mgr.y (mgr.14505) 199 : cluster [DBG] pgmap v213: 356 pgs: 32 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 273 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:36.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: cluster 2026-03-09T17:30:34.708767+0000 mgr.y (mgr.14505) 199 : cluster [DBG] pgmap v213: 356 pgs: 32 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 273 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:36.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:34.873046+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:36.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:34.873046+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:35.721413+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:35.721413+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:35.721522+0000 mon.a (mon.0) 1525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:35.721522+0000 mon.a (mon.0) 1525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:35.741234+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: audit 2026-03-09T17:30:35.741234+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: cluster 2026-03-09T17:30:35.753934+0000 mon.a (mon.0) 1526 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:35 vm00 bash[28333]: cluster 2026-03-09T17:30:35.753934+0000 mon.a (mon.0) 1526 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: cluster 2026-03-09T17:30:34.708767+0000 mgr.y (mgr.14505) 199 : cluster [DBG] pgmap v213: 356 pgs: 32 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 273 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: cluster 2026-03-09T17:30:34.708767+0000 mgr.y (mgr.14505) 199 : cluster [DBG] pgmap v213: 356 pgs: 32 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 273 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:34.873046+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:34.873046+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:35.721413+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:35.721413+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:35.721522+0000 mon.a (mon.0) 1525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:35.721522+0000 mon.a (mon.0) 1525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:35.741234+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: audit 2026-03-09T17:30:35.741234+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: cluster 2026-03-09T17:30:35.753934+0000 mon.a (mon.0) 1526 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T17:30:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:35 vm00 bash[20770]: cluster 2026-03-09T17:30:35.753934+0000 mon.a (mon.0) 1526 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: cluster 2026-03-09T17:30:34.708767+0000 mgr.y (mgr.14505) 199 : cluster [DBG] pgmap v213: 356 pgs: 32 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 273 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: cluster 2026-03-09T17:30:34.708767+0000 mgr.y (mgr.14505) 199 : cluster [DBG] pgmap v213: 356 pgs: 32 creating+peering, 39 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 273 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:34.873046+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:34.873046+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:35.721413+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:35.721413+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-60103-36"}]': finished 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:35.721522+0000 mon.a (mon.0) 1525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:35.721522+0000 mon.a (mon.0) 1525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:35.741234+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: audit 2026-03-09T17:30:35.741234+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: cluster 2026-03-09T17:30:35.753934+0000 mon.a (mon.0) 1526 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T17:30:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:35 vm02 bash[23351]: cluster 2026-03-09T17:30:35.753934+0000 mon.a (mon.0) 1526 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T17:30:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:30:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:30:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.762939+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.762939+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.768501+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/1411525669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.768501+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 
192.168.123.100:0/1411525669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.769909+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.769909+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.873708+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:35.873708+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:36.726009+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:36.726009+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:36.726089+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:36.726089+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:36.726150+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: audit 2026-03-09T17:30:36.726150+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: cluster 2026-03-09T17:30:36.736168+0000 mon.a (mon.0) 1532 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T17:30:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:36 vm02 bash[23351]: cluster 2026-03-09T17:30:36.736168+0000 mon.a (mon.0) 1532 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.762939+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.762939+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.768501+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/1411525669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.768501+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/1411525669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.769909+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.769909+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.873708+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:35.873708+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:36.726009+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:36.726009+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:37.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:36.726089+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:36.726089+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:36.726150+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: audit 2026-03-09T17:30:36.726150+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: cluster 2026-03-09T17:30:36.736168+0000 mon.a (mon.0) 1532 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:36 vm00 bash[28333]: cluster 2026-03-09T17:30:36.736168+0000 mon.a (mon.0) 1532 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.762939+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.762939+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.768501+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 
192.168.123.100:0/1411525669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.768501+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/1411525669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.769909+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.769909+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.873708+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:35.873708+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:36.726009+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:36.726009+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59908-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:36.726089+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:36.726089+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:36.726150+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: audit 2026-03-09T17:30:36.726150+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59916-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: cluster 2026-03-09T17:30:36.736168+0000 mon.a (mon.0) 1532 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T17:30:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:36 vm00 bash[20770]: cluster 2026-03-09T17:30:36.736168+0000 mon.a (mon.0) 1532 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: cluster 2026-03-09T17:30:36.709185+0000 mgr.y (mgr.14505) 200 : cluster [DBG] pgmap v216: 356 pgs: 64 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: cluster 2026-03-09T17:30:36.709185+0000 mgr.y (mgr.14505) 200 : cluster [DBG] pgmap v216: 356 pgs: 64 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: cluster 2026-03-09T17:30:36.783118+0000 mon.a (mon.0) 1533 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: cluster 2026-03-09T17:30:36.783118+0000 mon.a (mon.0) 1533 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:36.784543+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:36.784543+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:36.822584+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:36.822584+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:36.874538+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:36.874538+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:37.743734+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:37.743734+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: cluster 2026-03-09T17:30:37.746417+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: cluster 2026-03-09T17:30:37.746417+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:37.747243+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:37.747243+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:37.748685+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:37 vm02 bash[23351]: audit 2026-03-09T17:30:37.748685+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: cluster 2026-03-09T17:30:36.709185+0000 mgr.y (mgr.14505) 200 : cluster [DBG] pgmap v216: 356 pgs: 64 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: cluster 2026-03-09T17:30:36.709185+0000 mgr.y (mgr.14505) 200 : cluster [DBG] pgmap v216: 356 pgs: 64 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: cluster 2026-03-09T17:30:36.783118+0000 mon.a (mon.0) 1533 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: cluster 2026-03-09T17:30:36.783118+0000 mon.a (mon.0) 1533 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:36.784543+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:36.784543+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:36.822584+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:36.822584+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:36.874538+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:36.874538+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:37.743734+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:37.743734+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: cluster 2026-03-09T17:30:37.746417+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: cluster 2026-03-09T17:30:37.746417+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:37.747243+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:37.747243+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:37.748685+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:37 vm00 bash[28333]: audit 2026-03-09T17:30:37.748685+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: cluster 2026-03-09T17:30:36.709185+0000 mgr.y (mgr.14505) 200 : cluster [DBG] pgmap v216: 356 pgs: 64 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: cluster 2026-03-09T17:30:36.709185+0000 mgr.y (mgr.14505) 200 : cluster [DBG] pgmap v216: 356 pgs: 64 unknown, 12 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 268 active+clean; 460 KiB data, 622 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: cluster 2026-03-09T17:30:36.783118+0000 mon.a (mon.0) 1533 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: cluster 2026-03-09T17:30:36.783118+0000 mon.a (mon.0) 1533 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:36.784543+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:36.784543+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:36.822584+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:36.822584+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:36.874538+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:36.874538+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:37.743734+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:37.743734+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: cluster 2026-03-09T17:30:37.746417+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T17:30:38.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: cluster 2026-03-09T17:30:37.746417+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T17:30:38.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:37.747243+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:37.747243+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:37.748685+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:38.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:37 vm00 bash[20770]: audit 2026-03-09T17:30:37.748685+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:38 vm02 bash[23351]: audit 2026-03-09T17:30:37.875585+0000 mon.c (mon.2) 335 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:38 vm02 bash[23351]: audit 2026-03-09T17:30:37.875585+0000 mon.c (mon.2) 335 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:39.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:38 vm00 bash[28333]: audit 2026-03-09T17:30:37.875585+0000 mon.c (mon.2) 335 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:38 vm00 bash[28333]: audit 2026-03-09T17:30:37.875585+0000 mon.c (mon.2) 335 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:38 vm00 bash[20770]: audit 2026-03-09T17:30:37.875585+0000 mon.c (mon.2) 335 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:38 vm00 bash[20770]: audit 2026-03-09T17:30:37.875585+0000 mon.c (mon.2) 335 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: cluster 2026-03-09T17:30:38.709768+0000 mgr.y (mgr.14505) 201 : cluster [DBG] pgmap v219: 331 pgs: 7 creating+peering, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: cluster 2026-03-09T17:30:38.709768+0000 mgr.y (mgr.14505) 201 : cluster [DBG] pgmap v219: 331 pgs: 7 creating+peering, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.842658+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.842658+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.846801+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.846801+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: cluster 2026-03-09T17:30:38.853716+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: cluster 2026-03-09T17:30:38.853716+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.857053+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.857053+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.857988+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.857988+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.872043+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.872043+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.883507+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.883507+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.883957+0000 mon.c (mon.2) 337 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: audit 2026-03-09T17:30:38.883957+0000 mon.c (mon.2) 337 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: cluster 2026-03-09T17:30:39.842831+0000 mon.a (mon.0) 1543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:39 vm00 bash[28333]: cluster 2026-03-09T17:30:39.842831+0000 mon.a (mon.0) 1543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: cluster 2026-03-09T17:30:38.709768+0000 mgr.y (mgr.14505) 201 : cluster [DBG] pgmap v219: 331 pgs: 7 creating+peering, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: cluster 2026-03-09T17:30:38.709768+0000 mgr.y (mgr.14505) 201 : cluster [DBG] pgmap v219: 331 pgs: 7 creating+peering, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.842658+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.842658+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.846801+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.846801+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: cluster 2026-03-09T17:30:38.853716+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: cluster 2026-03-09T17:30:38.853716+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.857053+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.857053+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.857988+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.857988+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.872043+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.872043+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.883507+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.883507+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.883957+0000 mon.c (mon.2) 337 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: audit 2026-03-09T17:30:38.883957+0000 mon.c (mon.2) 337 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: cluster 2026-03-09T17:30:39.842831+0000 mon.a (mon.0) 1543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:39 vm00 bash[20770]: cluster 2026-03-09T17:30:39.842831+0000 mon.a (mon.0) 1543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: cluster 2026-03-09T17:30:38.709768+0000 mgr.y (mgr.14505) 201 : cluster [DBG] pgmap v219: 331 pgs: 7 creating+peering, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: cluster 2026-03-09T17:30:38.709768+0000 mgr.y (mgr.14505) 201 : cluster [DBG] pgmap v219: 331 pgs: 7 creating+peering, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 300 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.842658+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.842658+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.846801+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.846801+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: cluster 2026-03-09T17:30:38.853716+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: cluster 2026-03-09T17:30:38.853716+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.857053+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.857053+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.857988+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.857988+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.872043+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.872043+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.883507+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.883507+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.883957+0000 mon.c (mon.2) 337 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: audit 2026-03-09T17:30:38.883957+0000 mon.c (mon.2) 337 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: cluster 2026-03-09T17:30:39.842831+0000 mon.a (mon.0) 1543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:39 vm02 bash[23351]: cluster 2026-03-09T17:30:39.842831+0000 mon.a (mon.0) 1543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:41.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.859745+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.859745+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.859907+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.859907+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.859998+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.859998+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: cluster 2026-03-09T17:30:39.872624+0000 mon.a (mon.0) 1547 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: cluster 2026-03-09T17:30:39.872624+0000 mon.a (mon.0) 1547 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.875460+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 
192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.875460+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.884620+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.884620+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.889609+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:39.889609+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.033251+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.033251+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.034541+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.034541+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.889177+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.889177+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.889215+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.889215+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.893759+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.893759+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: cluster 2026-03-09T17:30:40.895659+0000 mon.a (mon.0) 1552 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: cluster 2026-03-09T17:30:40.895659+0000 mon.a (mon.0) 1552 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.897208+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:40 vm00 bash[28333]: audit 2026-03-09T17:30:40.897208+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.859745+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.859745+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.859907+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 
192.168.123.100:0/2749466399' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.859907+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.859998+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.859998+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: cluster 2026-03-09T17:30:39.872624+0000 mon.a (mon.0) 1547 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: cluster 2026-03-09T17:30:39.872624+0000 mon.a (mon.0) 1547 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.875460+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.875460+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.884620+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.884620+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.889609+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:39.889609+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.033251+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.033251+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.034541+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.034541+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.889177+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.889177+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.889215+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.889215+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.893759+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.893759+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: cluster 2026-03-09T17:30:40.895659+0000 mon.a (mon.0) 1552 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: cluster 2026-03-09T17:30:40.895659+0000 mon.a (mon.0) 1552 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.897208+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:40 vm00 bash[20770]: audit 2026-03-09T17:30:40.897208+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.859745+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.859745+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-17", "mode": "writeback"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.859907+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.859907+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.100:0/2749466399' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59916-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.859998+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.859998+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: cluster 2026-03-09T17:30:39.872624+0000 mon.a (mon.0) 1547 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: cluster 2026-03-09T17:30:39.872624+0000 mon.a (mon.0) 1547 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.875460+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.875460+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.100:0/3001520107' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.884620+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.884620+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.889609+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:39.889609+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.033251+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.033251+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.034541+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.034541+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.889177+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.889177+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59908-30"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.889215+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.889215+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.893759+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.893759+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: cluster 2026-03-09T17:30:40.895659+0000 mon.a (mon.0) 1552 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T17:30:41.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: cluster 2026-03-09T17:30:40.895659+0000 mon.a (mon.0) 1552 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T17:30:41.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.897208+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:40 vm02 bash[23351]: audit 2026-03-09T17:30:40.897208+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]: dispatch 2026-03-09T17:30:41.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:30:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: cluster 2026-03-09T17:30:40.710348+0000 mgr.y (mgr.14505) 202 : cluster [DBG] pgmap v222: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: cluster 2026-03-09T17:30:40.710348+0000 mgr.y (mgr.14505) 202 : cluster [DBG] pgmap v222: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.888442+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.888442+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.934743+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.934743+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.955932+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.955932+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.958098+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.958098+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 
192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.962491+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.962491+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.963248+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.963248+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.963587+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:40.963587+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.889420+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.889420+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: cluster 2026-03-09T17:30:41.890457+0000 mon.a (mon.0) 1557 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: cluster 2026-03-09T17:30:41.890457+0000 mon.a (mon.0) 1557 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.895066+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.895066+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.895129+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.895129+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: cluster 2026-03-09T17:30:41.908220+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: cluster 2026-03-09T17:30:41.908220+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.923192+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/2946074285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.923192+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/2946074285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.930024+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:42 vm00 bash[28333]: audit 2026-03-09T17:30:41.930024+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: cluster 2026-03-09T17:30:40.710348+0000 mgr.y (mgr.14505) 202 : cluster [DBG] pgmap v222: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: cluster 2026-03-09T17:30:40.710348+0000 mgr.y (mgr.14505) 202 : cluster [DBG] pgmap v222: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.888442+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.888442+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.934743+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.934743+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.955932+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.955932+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.958098+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.958098+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 
192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.962491+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.962491+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.963248+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.963248+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.963587+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:40.963587+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.889420+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.889420+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: cluster 2026-03-09T17:30:41.890457+0000 mon.a (mon.0) 1557 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: cluster 2026-03-09T17:30:41.890457+0000 mon.a (mon.0) 1557 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.895066+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.895066+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.895129+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.895129+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: cluster 2026-03-09T17:30:41.908220+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: cluster 2026-03-09T17:30:41.908220+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.923192+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/2946074285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.923192+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/2946074285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.930024+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:41 vm00 bash[20770]: audit 2026-03-09T17:30:41.930024+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: cluster 2026-03-09T17:30:40.710348+0000 mgr.y (mgr.14505) 202 : cluster [DBG] pgmap v222: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: cluster 2026-03-09T17:30:40.710348+0000 mgr.y (mgr.14505) 202 : cluster [DBG] pgmap v222: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.888442+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.888442+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.934743+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.934743+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.955932+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.955932+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.958098+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.958098+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 
192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.962491+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.962491+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.963248+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.963248+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.963587+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:40.963587+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.889420+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.889420+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: cluster 2026-03-09T17:30:41.890457+0000 mon.a (mon.0) 1557 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: cluster 2026-03-09T17:30:41.890457+0000 mon.a (mon.0) 1557 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.895066+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.895066+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-17"}]': finished 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.895129+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.895129+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59908-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: cluster 2026-03-09T17:30:41.908220+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: cluster 2026-03-09T17:30:41.908220+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.923192+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/2946074285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.923192+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/2946074285' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.930024+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:42.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:42 vm02 bash[23351]: audit 2026-03-09T17:30:41.930024+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:43.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:41.622045+0000 mgr.y (mgr.14505) 203 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:43.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:41.622045+0000 mgr.y (mgr.14505) 203 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:43.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:41.939530+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:41.939530+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:41.957069+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:41.957069+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:42.489200+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:42.489200+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:42.890445+0000 mon.c (mon.2) 347 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:42.890445+0000 mon.c (mon.2) 347 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:42.903791+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: audit 2026-03-09T17:30:42.903791+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: cluster 2026-03-09T17:30:42.929991+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:43 vm00 bash[28333]: cluster 2026-03-09T17:30:42.929991+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:41.622045+0000 mgr.y (mgr.14505) 203 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:41.622045+0000 mgr.y (mgr.14505) 203 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:41.939530+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:41.939530+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:41.957069+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:41.957069+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:42.489200+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:42.489200+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:42.890445+0000 mon.c (mon.2) 347 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:42.890445+0000 mon.c (mon.2) 347 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:42.903791+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: audit 2026-03-09T17:30:42.903791+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: cluster 2026-03-09T17:30:42.929991+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T17:30:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:43 vm00 bash[20770]: cluster 2026-03-09T17:30:42.929991+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:41.622045+0000 mgr.y (mgr.14505) 203 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:41.622045+0000 mgr.y (mgr.14505) 203 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:41.939530+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 
192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:41.939530+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:41.957069+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:41.957069+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:42.489200+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:42.489200+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:42.890445+0000 mon.c (mon.2) 347 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:42.890445+0000 mon.c (mon.2) 347 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:42.903791+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: audit 2026-03-09T17:30:42.903791+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59916-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: cluster 2026-03-09T17:30:42.929991+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T17:30:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:43 vm02 bash[23351]: cluster 2026-03-09T17:30:42.929991+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T17:30:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: cluster 2026-03-09T17:30:42.710763+0000 mgr.y (mgr.14505) 204 : cluster [DBG] pgmap v225: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: cluster 2026-03-09T17:30:42.710763+0000 mgr.y (mgr.14505) 204 : cluster [DBG] pgmap v225: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:44.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: cluster 2026-03-09T17:30:42.974563+0000 mon.a (mon.0) 1565 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: cluster 2026-03-09T17:30:42.974563+0000 mon.a (mon.0) 1565 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.891418+0000 mon.c (mon.2) 348 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.891418+0000 mon.c (mon.2) 348 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.937084+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.937084+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: cluster 2026-03-09T17:30:43.950322+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: cluster 2026-03-09T17:30:43.950322+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.955960+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.955960+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.960774+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:44 vm00 bash[28333]: audit 2026-03-09T17:30:43.960774+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: cluster 2026-03-09T17:30:42.710763+0000 mgr.y (mgr.14505) 204 : cluster [DBG] pgmap v225: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: cluster 2026-03-09T17:30:42.710763+0000 mgr.y (mgr.14505) 204 : cluster [DBG] pgmap v225: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: cluster 2026-03-09T17:30:42.974563+0000 mon.a (mon.0) 1565 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: cluster 2026-03-09T17:30:42.974563+0000 mon.a (mon.0) 1565 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.891418+0000 mon.c (mon.2) 348 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.891418+0000 mon.c (mon.2) 348 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.937084+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.937084+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: cluster 2026-03-09T17:30:43.950322+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: cluster 2026-03-09T17:30:43.950322+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.955960+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.955960+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.960774+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:44 vm00 bash[20770]: audit 2026-03-09T17:30:43.960774+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: cluster 2026-03-09T17:30:42.710763+0000 mgr.y (mgr.14505) 204 : cluster [DBG] pgmap v225: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: cluster 2026-03-09T17:30:42.710763+0000 mgr.y (mgr.14505) 204 : cluster [DBG] pgmap v225: 355 pgs: 32 unknown, 1 clean+premerge+peered, 11 active+clean+snaptrim_wait, 12 active+clean+snaptrim, 299 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: cluster 2026-03-09T17:30:42.974563+0000 mon.a (mon.0) 1565 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: cluster 2026-03-09T17:30:42.974563+0000 mon.a (mon.0) 1565 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.891418+0000 mon.c (mon.2) 348 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.891418+0000 mon.c (mon.2) 348 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.937084+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.937084+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59908-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: cluster 2026-03-09T17:30:43.950322+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: cluster 2026-03-09T17:30:43.950322+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.955960+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.955960+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.960774+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:44.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:44 vm02 bash[23351]: audit 2026-03-09T17:30:43.960774+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.892347+0000 mon.c (mon.2) 349 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.892347+0000 mon.c (mon.2) 349 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.946541+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.946541+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: cluster 2026-03-09T17:30:44.964291+0000 mon.a (mon.0) 1570 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: cluster 2026-03-09T17:30:44.964291+0000 mon.a (mon.0) 1570 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.965366+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.965366+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.967697+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.967697+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.967749+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.967749+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.969850+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:45 vm02 bash[23351]: audit 2026-03-09T17:30:44.969850+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.892347+0000 mon.c (mon.2) 349 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.892347+0000 mon.c (mon.2) 349 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.946541+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.946541+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: cluster 2026-03-09T17:30:44.964291+0000 mon.a (mon.0) 1570 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: cluster 2026-03-09T17:30:44.964291+0000 mon.a (mon.0) 1570 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.965366+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.965366+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.967697+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.967697+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.967749+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.967749+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.969850+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:45 vm00 bash[28333]: audit 2026-03-09T17:30:44.969850+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.892347+0000 mon.c (mon.2) 349 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.892347+0000 mon.c (mon.2) 349 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.946541+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.946541+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: cluster 2026-03-09T17:30:44.964291+0000 mon.a (mon.0) 1570 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: cluster 2026-03-09T17:30:44.964291+0000 mon.a (mon.0) 1570 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.965366+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.965366+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.967697+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.967697+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.967749+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.967749+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.969850+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:45 vm00 bash[20770]: audit 2026-03-09T17:30:44.969850+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:46.301 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: cluster 2026-03-09T17:30:44.711110+0000 mgr.y (mgr.14505) 205 : cluster [DBG] pgmap v228: 331 pgs: 40 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: cluster 2026-03-09T17:30:44.711110+0000 mgr.y (mgr.14505) 205 : cluster [DBG] pgmap v228: 331 pgs: 40 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.601911+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.601911+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.893277+0000 mon.c (mon.2) 351 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.893277+0000 mon.c (mon.2) 351 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.937470+0000 mon.a (mon.0) 1573 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.937470+0000 mon.a (mon.0) 1573 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.949712+0000 mon.a (mon.0) 1574 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.949712+0000 mon.a (mon.0) 1574 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.953096+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.953096+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.953187+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.953187+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.963633+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.963633+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: cluster 2026-03-09T17:30:45.980385+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: cluster 2026-03-09T17:30:45.980385+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.981077+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:45.981077+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:46.003424+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:46.003424+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:46.004612+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:46 vm00 bash[20770]: audit 2026-03-09T17:30:46.004612+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:30:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:30:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: cluster 2026-03-09T17:30:44.711110+0000 mgr.y (mgr.14505) 205 : cluster [DBG] pgmap v228: 331 pgs: 40 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: cluster 2026-03-09T17:30:44.711110+0000 mgr.y (mgr.14505) 205 : cluster [DBG] pgmap v228: 331 pgs: 40 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.601911+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.601911+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.893277+0000 mon.c (mon.2) 351 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.893277+0000 mon.c (mon.2) 351 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.937470+0000 mon.a (mon.0) 1573 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.937470+0000 mon.a (mon.0) 1573 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.949712+0000 mon.a (mon.0) 1574 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.949712+0000 mon.a (mon.0) 1574 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.953096+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.953096+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.953187+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.953187+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.963633+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.963633+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: cluster 2026-03-09T17:30:45.980385+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: cluster 2026-03-09T17:30:45.980385+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T17:30:46.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.981077+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:45.981077+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:46.003424+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 
192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:46.003424+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:46.004612+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:46 vm00 bash[28333]: audit 2026-03-09T17:30:46.004612+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: cluster 2026-03-09T17:30:44.711110+0000 mgr.y (mgr.14505) 205 : cluster [DBG] pgmap v228: 331 pgs: 40 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: cluster 2026-03-09T17:30:44.711110+0000 mgr.y (mgr.14505) 205 : cluster [DBG] pgmap v228: 331 pgs: 40 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.601911+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.601911+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.893277+0000 mon.c (mon.2) 351 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.893277+0000 mon.c (mon.2) 351 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.937470+0000 mon.a (mon.0) 1573 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.937470+0000 mon.a (mon.0) 1573 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.949712+0000 mon.a (mon.0) 1574 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.949712+0000 mon.a (mon.0) 1574 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.953096+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.953096+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.953187+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.953187+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59916-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.963633+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.963633+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: cluster 2026-03-09T17:30:45.980385+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: cluster 2026-03-09T17:30:45.980385+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.981077+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:45.981077+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:46.003424+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:46.003424+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:46.004612+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:46 vm02 bash[23351]: audit 2026-03-09T17:30:46.004612+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.341708+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.341708+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.343116+0000 mon.c (mon.2) 354 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.343116+0000 mon.c (mon.2) 354 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.349757+0000 mon.a (mon.0) 1580 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.349757+0000 mon.a (mon.0) 1580 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461078+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461078+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.100:0/4063142816' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461141+0000 mon.b (mon.1) 173 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461141+0000 mon.b (mon.1) 173 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461182+0000 mon.b (mon.1) 174 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461182+0000 mon.b (mon.1) 174 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461223+0000 mon.b (mon.1) 175 : audit [INF] "var": "eio", 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461223+0000 mon.b (mon.1) 175 : audit [INF] "var": "eio", 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461265+0000 mon.b (mon.1) 176 : audit [INF] "val": "true" 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461265+0000 mon.b (mon.1) 176 : audit [INF] "val": "true" 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461317+0000 mon.b (mon.1) 177 : audit [INF] }]: dispatch 2026-03-09T17:30:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.461317+0000 mon.b (mon.1) 177 : audit [INF] }]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462638+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462638+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462651+0000 mon.a (mon.0) 1582 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462651+0000 mon.a (mon.0) 1582 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462656+0000 mon.a (mon.0) 1583 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462656+0000 mon.a (mon.0) 1583 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462660+0000 mon.a (mon.0) 1584 : audit [INF] "var": "eio", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462660+0000 mon.a (mon.0) 1584 : audit [INF] "var": "eio", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462664+0000 mon.a (mon.0) 1585 : audit [INF] "val": "true" 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462664+0000 mon.a (mon.0) 1585 : audit [INF] "val": "true" 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462668+0000 mon.a (mon.0) 1586 : audit [INF] }]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.462668+0000 mon.a (mon.0) 1586 : audit [INF] }]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.894369+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.894369+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956084+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956084+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956199+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956199+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956405+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956405+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956453+0000 mon.a (mon.0) 1590 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956453+0000 mon.a (mon.0) 1590 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956496+0000 mon.a (mon.0) 1591 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956496+0000 mon.a (mon.0) 1591 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956537+0000 mon.a (mon.0) 1592 : audit [INF] "var": "eio", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956537+0000 mon.a (mon.0) 1592 : audit [INF] "var": "eio", 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956579+0000 mon.a (mon.0) 1593 : audit [INF] "val": "true" 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956579+0000 mon.a (mon.0) 1593 : audit [INF] "val": "true" 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956620+0000 mon.a (mon.0) 1594 : audit [INF] }]': finished 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.956620+0000 mon.a (mon.0) 1594 : audit [INF] }]': finished 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.961725+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.961725+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.966988+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.966988+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: cluster 2026-03-09T17:30:46.967703+0000 mon.a (mon.0) 1595 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: cluster 2026-03-09T17:30:46.967703+0000 mon.a (mon.0) 1595 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.969461+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.969461+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.969524+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:47 vm02 bash[23351]: audit 2026-03-09T17:30:46.969524+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.341708+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.341708+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.343116+0000 mon.c (mon.2) 354 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.343116+0000 mon.c (mon.2) 354 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.349757+0000 mon.a (mon.0) 1580 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.349757+0000 mon.a (mon.0) 1580 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461078+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461078+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.100:0/4063142816' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461141+0000 mon.b (mon.1) 173 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461141+0000 mon.b (mon.1) 173 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461182+0000 mon.b (mon.1) 174 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461182+0000 mon.b (mon.1) 174 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461223+0000 mon.b (mon.1) 175 : audit [INF] "var": "eio", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461223+0000 mon.b (mon.1) 175 : audit [INF] "var": "eio", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461265+0000 mon.b (mon.1) 176 : audit [INF] "val": "true" 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461265+0000 mon.b (mon.1) 176 : audit [INF] "val": "true" 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461317+0000 mon.b (mon.1) 177 : audit [INF] }]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.461317+0000 mon.b (mon.1) 177 : audit [INF] }]: dispatch 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462638+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462638+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462651+0000 mon.a (mon.0) 1582 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462651+0000 mon.a (mon.0) 1582 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462656+0000 mon.a (mon.0) 1583 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462656+0000 mon.a (mon.0) 1583 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462660+0000 mon.a (mon.0) 1584 : audit [INF] "var": "eio", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462660+0000 mon.a (mon.0) 1584 : audit [INF] "var": "eio", 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462664+0000 mon.a (mon.0) 1585 : audit [INF] "val": "true" 2026-03-09T17:30:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462664+0000 mon.a (mon.0) 1585 : audit [INF] "val": "true" 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462668+0000 mon.a (mon.0) 1586 : audit [INF] }]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.462668+0000 mon.a (mon.0) 1586 : audit [INF] }]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.894369+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.894369+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956084+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956084+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956199+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956199+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956405+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956405+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956453+0000 mon.a (mon.0) 1590 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956453+0000 mon.a (mon.0) 1590 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956496+0000 mon.a (mon.0) 1591 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956496+0000 mon.a (mon.0) 1591 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956537+0000 mon.a (mon.0) 1592 : audit [INF] "var": "eio", 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956537+0000 mon.a (mon.0) 1592 : audit [INF] "var": "eio", 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956579+0000 mon.a (mon.0) 1593 : audit [INF] "val": "true" 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956579+0000 mon.a (mon.0) 1593 : audit [INF] "val": "true" 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956620+0000 mon.a (mon.0) 1594 : audit [INF] }]': finished 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.956620+0000 mon.a (mon.0) 1594 : audit [INF] }]': finished 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.961725+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.961725+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.966988+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.966988+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: cluster 2026-03-09T17:30:46.967703+0000 mon.a (mon.0) 1595 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: cluster 2026-03-09T17:30:46.967703+0000 mon.a (mon.0) 1595 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.969461+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.969461+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.969524+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:47 vm00 bash[28333]: audit 2026-03-09T17:30:46.969524+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.341708+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.341708+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:30:47.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.343116+0000 mon.c (mon.2) 354 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.343116+0000 mon.c (mon.2) 354 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.349757+0000 mon.a (mon.0) 1580 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.349757+0000 mon.a (mon.0) 1580 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461078+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.100:0/4063142816' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461078+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.100:0/4063142816' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461141+0000 mon.b (mon.1) 173 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461141+0000 mon.b (mon.1) 173 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461182+0000 mon.b (mon.1) 174 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461182+0000 mon.b (mon.1) 174 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461223+0000 mon.b (mon.1) 175 : audit [INF] "var": "eio", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461223+0000 mon.b (mon.1) 175 : audit [INF] "var": "eio", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461265+0000 mon.b (mon.1) 176 : audit [INF] "val": "true" 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461265+0000 mon.b (mon.1) 176 : audit [INF] "val": "true" 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461317+0000 mon.b (mon.1) 177 : audit [INF] }]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.461317+0000 mon.b (mon.1) 177 : audit [INF] }]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462638+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462638+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462651+0000 mon.a (mon.0) 1582 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462651+0000 mon.a (mon.0) 1582 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462656+0000 mon.a (mon.0) 1583 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462656+0000 mon.a (mon.0) 1583 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462660+0000 mon.a (mon.0) 1584 : audit [INF] "var": "eio", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462660+0000 mon.a (mon.0) 1584 : audit [INF] "var": "eio", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462664+0000 mon.a (mon.0) 1585 : audit [INF] "val": "true" 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462664+0000 mon.a (mon.0) 1585 : audit [INF] "val": "true" 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462668+0000 mon.a (mon.0) 1586 : audit [INF] }]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.462668+0000 mon.a (mon.0) 1586 : audit [INF] }]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.894369+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.894369+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956084+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956084+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956199+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956199+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956405+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956405+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956453+0000 mon.a (mon.0) 1590 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956453+0000 mon.a (mon.0) 1590 : audit [INF] "prefix": "osd pool set", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956496+0000 mon.a (mon.0) 1591 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956496+0000 mon.a (mon.0) 1591 : audit [INF] "pool": "PoolEIOFlag_vm00-59916-33", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956537+0000 mon.a (mon.0) 1592 : audit [INF] "var": "eio", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956537+0000 mon.a (mon.0) 1592 : audit [INF] "var": "eio", 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956579+0000 mon.a (mon.0) 1593 : audit [INF] "val": "true" 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956579+0000 mon.a (mon.0) 1593 : audit [INF] "val": "true" 2026-03-09T17:30:47.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956620+0000 mon.a (mon.0) 1594 : audit [INF] }]': finished 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.956620+0000 mon.a (mon.0) 1594 : audit [INF] }]': finished 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.961725+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.961725+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.966988+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.966988+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.100:0/4225327915' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: cluster 2026-03-09T17:30:46.967703+0000 mon.a (mon.0) 1595 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: cluster 2026-03-09T17:30:46.967703+0000 mon.a (mon.0) 1595 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.969461+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.969461+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.969524+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:47.541 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:47 vm00 bash[20770]: audit 2026-03-09T17:30:46.969524+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]: dispatch 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: cluster 2026-03-09T17:30:46.711496+0000 mgr.y (mgr.14505) 206 : cluster [DBG] pgmap v231: 355 pgs: 64 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: cluster 2026-03-09T17:30:46.711496+0000 mgr.y (mgr.14505) 206 : cluster [DBG] pgmap v231: 355 pgs: 64 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.895158+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.895158+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: cluster 2026-03-09T17:30:47.956297+0000 mon.a (mon.0) 1598 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: cluster 2026-03-09T17:30:47.956297+0000 mon.a (mon.0) 1598 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.962480+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]': finished 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.962480+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]': finished 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.962549+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.962549+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: cluster 2026-03-09T17:30:47.972160+0000 mon.a (mon.0) 1601 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: cluster 2026-03-09T17:30:47.972160+0000 mon.a (mon.0) 1601 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.993040+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.993040+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.993603+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.993603+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.994751+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:47.994751+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.001377+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.001377+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.002086+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.002086+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.002389+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.002389+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.043095+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.043095+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.044368+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:48 vm02 bash[23351]: audit 2026-03-09T17:30:48.044368+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: cluster 2026-03-09T17:30:46.711496+0000 mgr.y (mgr.14505) 206 : cluster [DBG] pgmap v231: 355 pgs: 64 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: cluster 2026-03-09T17:30:46.711496+0000 mgr.y (mgr.14505) 206 : cluster [DBG] pgmap v231: 355 pgs: 64 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.895158+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.895158+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: cluster 2026-03-09T17:30:47.956297+0000 mon.a (mon.0) 1598 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: cluster 2026-03-09T17:30:47.956297+0000 mon.a (mon.0) 1598 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.962480+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.962480+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.962549+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.962549+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: cluster 2026-03-09T17:30:47.972160+0000 mon.a (mon.0) 1601 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: cluster 2026-03-09T17:30:47.972160+0000 mon.a (mon.0) 1601 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.993040+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.993040+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.993603+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.993603+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.994751+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:47.994751+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.001377+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.001377+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.002086+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.002086+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.002389+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.002389+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.043095+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.043095+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.044368+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:48 vm00 bash[28333]: audit 2026-03-09T17:30:48.044368+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: cluster 2026-03-09T17:30:46.711496+0000 mgr.y (mgr.14505) 206 : cluster [DBG] pgmap v231: 355 pgs: 64 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: cluster 2026-03-09T17:30:46.711496+0000 mgr.y (mgr.14505) 206 : cluster [DBG] pgmap v231: 355 pgs: 64 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 627 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.895158+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.895158+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: cluster 2026-03-09T17:30:47.956297+0000 mon.a (mon.0) 1598 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: cluster 2026-03-09T17:30:47.956297+0000 mon.a (mon.0) 1598 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.962480+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.962480+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-19", "mode": "writeback"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.962549+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.962549+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59908-31"}]': finished 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: cluster 2026-03-09T17:30:47.972160+0000 mon.a (mon.0) 1601 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: cluster 2026-03-09T17:30:47.972160+0000 mon.a (mon.0) 1601 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.993040+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.993040+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.993603+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.993603+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.994751+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:47.994751+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.001377+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.001377+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.002086+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 
192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.002086+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.002389+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.002389+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.043095+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.043095+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.044368+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:48 vm00 bash[20770]: audit 2026-03-09T17:30:48.044368+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:49.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: cluster 2026-03-09T17:30:48.711864+0000 mgr.y (mgr.14505) 207 : cluster [DBG] pgmap v234: 323 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:49.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: cluster 2026-03-09T17:30:48.711864+0000 mgr.y (mgr.14505) 207 : cluster [DBG] pgmap v234: 323 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:49.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.712200+0000 mon.c (mon.2) 361 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.712200+0000 mon.c (mon.2) 361 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.712923+0000 mon.a (mon.0) 1606 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.712923+0000 mon.a (mon.0) 1606 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.895974+0000 mon.c (mon.2) 362 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.895974+0000 mon.c (mon.2) 362 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.966278+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.966278+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.966452+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.966452+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.966550+0000 mon.a (mon.0) 1609 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.966550+0000 mon.a (mon.0) 1609 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: cluster 2026-03-09T17:30:48.970343+0000 mon.a (mon.0) 1610 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: cluster 2026-03-09T17:30:48.970343+0000 mon.a (mon.0) 1610 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.971638+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.971638+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.974582+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.974582+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.981447+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 
192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.981447+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.987622+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.987622+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.988844+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.100:0/3055522931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.988844+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.100:0/3055522931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.990509+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:49 vm00 bash[28333]: audit 2026-03-09T17:30:48.990509+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: cluster 2026-03-09T17:30:48.711864+0000 mgr.y (mgr.14505) 207 : cluster [DBG] pgmap v234: 323 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: cluster 2026-03-09T17:30:48.711864+0000 mgr.y (mgr.14505) 207 : cluster [DBG] pgmap v234: 323 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.712200+0000 mon.c (mon.2) 361 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.712200+0000 mon.c (mon.2) 361 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.712923+0000 mon.a (mon.0) 1606 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.712923+0000 mon.a (mon.0) 1606 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.895974+0000 mon.c (mon.2) 362 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.895974+0000 mon.c (mon.2) 362 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.966278+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.966278+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.966452+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.966452+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.966550+0000 mon.a (mon.0) 1609 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.966550+0000 mon.a (mon.0) 1609 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: cluster 2026-03-09T17:30:48.970343+0000 mon.a (mon.0) 1610 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: cluster 2026-03-09T17:30:48.970343+0000 mon.a (mon.0) 1610 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.971638+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.971638+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.974582+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.974582+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.981447+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 
192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.981447+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.987622+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.987622+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.988844+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.100:0/3055522931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.988844+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.100:0/3055522931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.990509+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:49 vm00 bash[20770]: audit 2026-03-09T17:30:48.990509+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: cluster 2026-03-09T17:30:48.711864+0000 mgr.y (mgr.14505) 207 : cluster [DBG] pgmap v234: 323 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: cluster 2026-03-09T17:30:48.711864+0000 mgr.y (mgr.14505) 207 : cluster [DBG] pgmap v234: 323 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.712200+0000 mon.c (mon.2) 361 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.712200+0000 mon.c (mon.2) 361 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.712923+0000 mon.a (mon.0) 1606 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.712923+0000 mon.a (mon.0) 1606 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.895974+0000 mon.c (mon.2) 362 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.895974+0000 mon.c (mon.2) 362 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.966278+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.966278+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59908-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.966452+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.966452+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.966550+0000 mon.a (mon.0) 1609 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.966550+0000 mon.a (mon.0) 1609 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: cluster 2026-03-09T17:30:48.970343+0000 mon.a (mon.0) 1610 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: cluster 2026-03-09T17:30:48.970343+0000 mon.a (mon.0) 1610 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.971638+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.971638+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.974582+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.974582+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.981447+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 
192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.981447+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.987622+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.987622+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.988844+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.100:0/3055522931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.988844+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.100:0/3055522931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.990509+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:49 vm02 bash[23351]: audit 2026-03-09T17:30:48.990509+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: audit 2026-03-09T17:30:49.896719+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: audit 2026-03-09T17:30:49.896719+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: cluster 2026-03-09T17:30:49.966476+0000 mon.a (mon.0) 1614 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: cluster 2026-03-09T17:30:49.966476+0000 mon.a (mon.0) 1614 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: audit 2026-03-09T17:30:49.969298+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: audit 2026-03-09T17:30:49.969298+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: audit 2026-03-09T17:30:49.969385+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: audit 2026-03-09T17:30:49.969385+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:50.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: cluster 2026-03-09T17:30:50.019237+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:50 vm00 bash[28333]: cluster 2026-03-09T17:30:50.019237+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: audit 2026-03-09T17:30:49.896719+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: audit 2026-03-09T17:30:49.896719+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: cluster 2026-03-09T17:30:49.966476+0000 mon.a (mon.0) 1614 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: cluster 2026-03-09T17:30:49.966476+0000 mon.a (mon.0) 1614 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: audit 2026-03-09T17:30:49.969298+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: audit 2026-03-09T17:30:49.969298+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: audit 2026-03-09T17:30:49.969385+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: audit 2026-03-09T17:30:49.969385+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: cluster 2026-03-09T17:30:50.019237+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T17:30:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:50 vm00 bash[20770]: cluster 2026-03-09T17:30:50.019237+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T17:30:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: audit 2026-03-09T17:30:49.896719+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: audit 2026-03-09T17:30:49.896719+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: cluster 2026-03-09T17:30:49.966476+0000 mon.a (mon.0) 1614 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: cluster 2026-03-09T17:30:49.966476+0000 mon.a (mon.0) 1614 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: audit 2026-03-09T17:30:49.969298+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: audit 2026-03-09T17:30:49.969298+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-19"}]': finished 2026-03-09T17:30:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: audit 2026-03-09T17:30:49.969385+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: audit 2026-03-09T17:30:49.969385+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59916-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: cluster 2026-03-09T17:30:50.019237+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T17:30:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:50 vm02 bash[23351]: cluster 2026-03-09T17:30:50.019237+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T17:30:51.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: cluster 2026-03-09T17:30:50.712214+0000 mgr.y (mgr.14505) 208 : cluster [DBG] pgmap v237: 355 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:51.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: cluster 2026-03-09T17:30:50.712214+0000 mgr.y (mgr.14505) 208 : cluster [DBG] pgmap v237: 355 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:51.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: audit 2026-03-09T17:30:50.897369+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:51.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: audit 2026-03-09T17:30:50.897369+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:51.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: audit 2026-03-09T17:30:50.972082+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:51.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: audit 2026-03-09T17:30:50.972082+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: cluster 2026-03-09T17:30:50.975135+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:51 vm00 bash[28333]: cluster 2026-03-09T17:30:50.975135+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: cluster 2026-03-09T17:30:50.712214+0000 mgr.y (mgr.14505) 208 : cluster [DBG] pgmap v237: 355 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: cluster 2026-03-09T17:30:50.712214+0000 mgr.y (mgr.14505) 208 : cluster [DBG] pgmap v237: 355 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: audit 2026-03-09T17:30:50.897369+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: audit 2026-03-09T17:30:50.897369+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: audit 2026-03-09T17:30:50.972082+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: audit 2026-03-09T17:30:50.972082+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: cluster 2026-03-09T17:30:50.975135+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T17:30:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:51 vm00 bash[20770]: cluster 2026-03-09T17:30:50.975135+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: cluster 2026-03-09T17:30:50.712214+0000 mgr.y (mgr.14505) 208 : cluster [DBG] pgmap v237: 355 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: cluster 2026-03-09T17:30:50.712214+0000 mgr.y (mgr.14505) 208 : cluster [DBG] pgmap v237: 355 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 303 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: audit 2026-03-09T17:30:50.897369+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: audit 2026-03-09T17:30:50.897369+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: audit 2026-03-09T17:30:50.972082+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: audit 2026-03-09T17:30:50.972082+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59908-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: cluster 2026-03-09T17:30:50.975135+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T17:30:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:51 vm02 bash[23351]: cluster 2026-03-09T17:30:50.975135+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T17:30:51.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:30:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:30:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: cluster 2026-03-09T17:30:51.604732+0000 mon.a (mon.0) 1620 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: cluster 2026-03-09T17:30:51.604732+0000 mon.a (mon.0) 1620 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.626141+0000 mgr.y (mgr.14505) 209 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.626141+0000 mgr.y (mgr.14505) 209 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.898306+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.898306+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: cluster 2026-03-09T17:30:51.985503+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: cluster 2026-03-09T17:30:51.985503+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.996891+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.100:0/2942118530' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.996891+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 
192.168.123.100:0/2942118530' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.997998+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:51.997998+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:52.009368+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:52.009368+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:52.009476+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:52 vm00 bash[28333]: audit 2026-03-09T17:30:52.009476+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: cluster 2026-03-09T17:30:51.604732+0000 mon.a (mon.0) 1620 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: cluster 2026-03-09T17:30:51.604732+0000 mon.a (mon.0) 1620 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.626141+0000 mgr.y (mgr.14505) 209 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.626141+0000 mgr.y (mgr.14505) 209 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.898306+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.898306+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: cluster 2026-03-09T17:30:51.985503+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: cluster 2026-03-09T17:30:51.985503+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.996891+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.100:0/2942118530' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.996891+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.100:0/2942118530' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.997998+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:51.997998+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:52.009368+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:52.009368+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:52.009476+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:52 vm00 bash[20770]: audit 2026-03-09T17:30:52.009476+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: cluster 2026-03-09T17:30:51.604732+0000 mon.a (mon.0) 1620 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: cluster 2026-03-09T17:30:51.604732+0000 mon.a (mon.0) 1620 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:30:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.626141+0000 mgr.y (mgr.14505) 209 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.626141+0000 mgr.y (mgr.14505) 209 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.898306+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.898306+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: cluster 2026-03-09T17:30:51.985503+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: cluster 2026-03-09T17:30:51.985503+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.996891+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.100:0/2942118530' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.996891+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.100:0/2942118530' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.997998+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:51.997998+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:52.009368+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:52.009368+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:52.009476+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:52 vm02 bash[23351]: audit 2026-03-09T17:30:52.009476+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:53.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: cluster 2026-03-09T17:30:52.712559+0000 mgr.y (mgr.14505) 210 : cluster [DBG] pgmap v240: 363 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: cluster 2026-03-09T17:30:52.712559+0000 mgr.y (mgr.14505) 210 : cluster [DBG] pgmap v240: 363 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:52.899119+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:52.899119+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:52.980472+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:52.980472+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:52.980778+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:52.980778+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: cluster 2026-03-09T17:30:52.993795+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: cluster 2026-03-09T17:30:52.993795+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.013118+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 
192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.013118+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.014130+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.014130+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.042312+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.042312+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.043554+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:53 vm02 bash[23351]: audit 2026-03-09T17:30:53.043554+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: cluster 2026-03-09T17:30:52.712559+0000 mgr.y (mgr.14505) 210 : cluster [DBG] pgmap v240: 363 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: cluster 2026-03-09T17:30:52.712559+0000 mgr.y (mgr.14505) 210 : cluster [DBG] pgmap v240: 363 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:52.899119+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:52.899119+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:52.980472+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:52.980472+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:52.980778+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:52.980778+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: cluster 2026-03-09T17:30:52.993795+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: cluster 2026-03-09T17:30:52.993795+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.013118+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.013118+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.014130+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.014130+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.042312+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.042312+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.043554+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:53 vm00 bash[28333]: audit 2026-03-09T17:30:53.043554+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: cluster 2026-03-09T17:30:52.712559+0000 mgr.y (mgr.14505) 210 : cluster [DBG] pgmap v240: 363 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: cluster 2026-03-09T17:30:52.712559+0000 mgr.y (mgr.14505) 210 : cluster [DBG] pgmap v240: 363 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 632 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:52.899119+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:52.899119+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:52.980472+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:52.980472+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59916-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:52.980778+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:52.980778+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: cluster 2026-03-09T17:30:52.993795+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: cluster 2026-03-09T17:30:52.993795+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.013118+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.013118+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.014130+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.014130+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.042312+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.042312+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.043554+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:53 vm00 bash[20770]: audit 2026-03-09T17:30:53.043554+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:30:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.899784+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.899784+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.983576+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.983576+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.983664+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.983664+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.991711+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.991711+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: cluster 2026-03-09T17:30:53.993230+0000 mon.a (mon.0) 1631 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: cluster 2026-03-09T17:30:53.993230+0000 mon.a (mon.0) 1631 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.997439+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:53.997439+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:54.005006+0000 mon.a (mon.0) 1632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:54.005006+0000 mon.a (mon.0) 1632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:54.006083+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:54 vm02 bash[23351]: audit 2026-03-09T17:30:54.006083+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.037 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.899784+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.899784+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.983576+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.983576+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.983664+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.983664+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.991711+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.991711+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: cluster 2026-03-09T17:30:53.993230+0000 mon.a (mon.0) 1631 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: cluster 2026-03-09T17:30:53.993230+0000 mon.a (mon.0) 1631 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.997439+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:53.997439+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:54.005006+0000 mon.a (mon.0) 1632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:54.005006+0000 mon.a (mon.0) 1632 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:54.006083+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:54 vm00 bash[20770]: audit 2026-03-09T17:30:54.006083+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.899784+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.899784+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.983576+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.983576+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.983664+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.983664+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.991711+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.991711+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: cluster 2026-03-09T17:30:53.993230+0000 mon.a (mon.0) 1631 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: cluster 2026-03-09T17:30:53.993230+0000 mon.a (mon.0) 1631 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.997439+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:53.997439+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.100:0/4194848828' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:54.005006+0000 mon.a (mon.0) 1632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:54.005006+0000 mon.a (mon.0) 1632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:54.006083+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:54 vm00 bash[28333]: audit 2026-03-09T17:30:54.006083+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: cluster 2026-03-09T17:30:54.712922+0000 mgr.y (mgr.14505) 211 : cluster [DBG] pgmap v243: 323 pgs: 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: cluster 2026-03-09T17:30:54.712922+0000 mgr.y (mgr.14505) 211 : cluster [DBG] pgmap v243: 323 pgs: 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.713339+0000 mon.c (mon.2) 373 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.713339+0000 mon.c (mon.2) 373 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.714085+0000 mon.a (mon.0) 1634 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.714085+0000 mon.a (mon.0) 1634 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.900385+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.900385+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.987056+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.987056+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.987149+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.987149+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.987249+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:54.987249+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.004420+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.004420+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: cluster 2026-03-09T17:30:55.008001+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: cluster 2026-03-09T17:30:55.008001+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.009825+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.009825+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.010889+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 
192.168.123.100:0/1883091850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.010889+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/1883091850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.012593+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.012593+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.027904+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.027904+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.029103+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.029103+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.029162+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.029162+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.029907+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 
192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.029907+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.030252+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.030252+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.031054+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:55 vm02 bash[23351]: audit 2026-03-09T17:30:55.031054+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: cluster 2026-03-09T17:30:54.712922+0000 mgr.y (mgr.14505) 211 : cluster [DBG] pgmap v243: 323 pgs: 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: cluster 2026-03-09T17:30:54.712922+0000 mgr.y (mgr.14505) 211 : cluster [DBG] pgmap v243: 323 pgs: 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.713339+0000 mon.c (mon.2) 373 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.713339+0000 mon.c (mon.2) 373 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.714085+0000 mon.a (mon.0) 1634 : audit [INF] from='mgr.14505 ' entity='mgr.y' 
cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.714085+0000 mon.a (mon.0) 1634 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.900385+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.900385+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.987056+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.987056+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.987149+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.987149+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.987249+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:54.987249+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.004420+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.004420+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: cluster 2026-03-09T17:30:55.008001+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: cluster 2026-03-09T17:30:55.008001+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.009825+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.009825+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.010889+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/1883091850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.010889+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/1883091850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.012593+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.012593+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.027904+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.027904+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.029103+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 
192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.029103+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.029162+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.029162+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.029907+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.029907+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.030252+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.030252+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.031054+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:55 vm00 bash[20770]: audit 2026-03-09T17:30:55.031054+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: cluster 2026-03-09T17:30:54.712922+0000 mgr.y (mgr.14505) 211 : cluster [DBG] pgmap v243: 323 pgs: 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: cluster 2026-03-09T17:30:54.712922+0000 mgr.y (mgr.14505) 211 : cluster [DBG] pgmap v243: 323 pgs: 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.713339+0000 mon.c (mon.2) 373 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.713339+0000 mon.c (mon.2) 373 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.714085+0000 mon.a (mon.0) 1634 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.714085+0000 mon.a (mon.0) 1634 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.900385+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.900385+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.987056+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.987056+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59908-32"}]': finished 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.987149+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.987149+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.987249+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:54.987249+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.004420+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.004420+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: cluster 2026-03-09T17:30:55.008001+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: cluster 2026-03-09T17:30:55.008001+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.009825+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.009825+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.010889+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 
192.168.123.100:0/1883091850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.010889+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/1883091850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.012593+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.012593+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.027904+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.027904+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.029103+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.029103+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.029162+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.029162+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.029907+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 
192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.029907+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.030252+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.030252+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.031054+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:55 vm00 bash[28333]: audit 2026-03-09T17:30:55.031054+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:30:56.581 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:30:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:30:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:30:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.900973+0000 mon.c (mon.2) 375 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.900973+0000 mon.c (mon.2) 375 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: cluster 2026-03-09T17:30:55.987158+0000 mon.a (mon.0) 1644 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: cluster 2026-03-09T17:30:55.987158+0000 mon.a (mon.0) 1644 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.990259+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]': finished 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.990259+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]': finished 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.990332+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.990332+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.990360+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:55.990360+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: cluster 2026-03-09T17:30:55.993371+0000 mon.a (mon.0) 1648 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: cluster 2026-03-09T17:30:55.993371+0000 mon.a (mon.0) 1648 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.006705+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.006705+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.012042+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.012042+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.072026+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.072026+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.073251+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:56 vm02 bash[23351]: audit 2026-03-09T17:30:56.073251+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.900973+0000 mon.c (mon.2) 375 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.900973+0000 mon.c (mon.2) 375 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: cluster 2026-03-09T17:30:55.987158+0000 mon.a (mon.0) 1644 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: cluster 2026-03-09T17:30:55.987158+0000 mon.a (mon.0) 1644 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.990259+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.990259+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.990332+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.990332+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.990360+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:55.990360+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: cluster 2026-03-09T17:30:55.993371+0000 mon.a (mon.0) 1648 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: cluster 2026-03-09T17:30:55.993371+0000 mon.a (mon.0) 1648 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.006705+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.006705+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.012042+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.012042+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.072026+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.072026+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.073251+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:56 vm00 bash[28333]: audit 2026-03-09T17:30:56.073251+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.900973+0000 mon.c (mon.2) 375 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.900973+0000 mon.c (mon.2) 375 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: cluster 2026-03-09T17:30:55.987158+0000 mon.a (mon.0) 1644 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: cluster 2026-03-09T17:30:55.987158+0000 mon.a (mon.0) 1644 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.990259+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.990259+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-21", "mode": "writeback"}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.990332+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.990332+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.990360+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:55.990360+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59908-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: cluster 2026-03-09T17:30:55.993371+0000 mon.a (mon.0) 1648 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: cluster 2026-03-09T17:30:55.993371+0000 mon.a (mon.0) 1648 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.006705+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.006705+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.012042+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.012042+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.072026+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.072026+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.073251+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:56 vm00 bash[20770]: audit 2026-03-09T17:30:56.073251+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: cluster 2026-03-09T17:30:56.713349+0000 mgr.y (mgr.14505) 212 : cluster [DBG] pgmap v246: 355 pgs: 32 unknown, 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: cluster 2026-03-09T17:30:56.713349+0000 mgr.y (mgr.14505) 212 : cluster [DBG] pgmap v246: 355 pgs: 32 unknown, 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:56.901683+0000 mon.c (mon.2) 376 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:56.901683+0000 mon.c (mon.2) 376 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:56.993729+0000 mon.a (mon.0) 1651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:56.993729+0000 mon.a (mon.0) 1651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.011396+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.011396+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: cluster 2026-03-09T17:30:57.020965+0000 mon.a (mon.0) 1652 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: cluster 2026-03-09T17:30:57.020965+0000 mon.a (mon.0) 1652 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.022068+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.022068+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.023008+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.100:0/1636484815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.023008+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.100:0/1636484815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.024700+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.024700+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.506947+0000 mon.a (mon.0) 1655 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.506947+0000 mon.a (mon.0) 1655 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.508127+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:57 vm02 bash[23351]: audit 2026-03-09T17:30:57.508127+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:58.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: cluster 2026-03-09T17:30:56.713349+0000 mgr.y (mgr.14505) 212 : cluster [DBG] pgmap v246: 355 pgs: 32 unknown, 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: cluster 2026-03-09T17:30:56.713349+0000 mgr.y (mgr.14505) 212 : cluster [DBG] pgmap v246: 355 pgs: 32 unknown, 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:56.901683+0000 mon.c (mon.2) 376 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:56.901683+0000 mon.c (mon.2) 376 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:56.993729+0000 mon.a (mon.0) 1651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:56.993729+0000 mon.a (mon.0) 1651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.011396+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.011396+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: cluster 2026-03-09T17:30:57.020965+0000 mon.a (mon.0) 1652 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: cluster 2026-03-09T17:30:57.020965+0000 mon.a (mon.0) 1652 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.022068+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.022068+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.023008+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.100:0/1636484815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.023008+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.100:0/1636484815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.024700+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.024700+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.506947+0000 mon.a (mon.0) 1655 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.506947+0000 mon.a (mon.0) 1655 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.508127+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:57 vm00 bash[28333]: audit 2026-03-09T17:30:57.508127+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: cluster 2026-03-09T17:30:56.713349+0000 mgr.y (mgr.14505) 212 : cluster [DBG] pgmap v246: 355 pgs: 32 unknown, 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: cluster 2026-03-09T17:30:56.713349+0000 mgr.y (mgr.14505) 212 : cluster [DBG] pgmap v246: 355 pgs: 32 unknown, 32 creating+peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 459 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:56.901683+0000 mon.c (mon.2) 376 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:56.901683+0000 mon.c (mon.2) 376 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:56.993729+0000 mon.a (mon.0) 1651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:56.993729+0000 mon.a (mon.0) 1651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.011396+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.011396+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: cluster 2026-03-09T17:30:57.020965+0000 mon.a (mon.0) 1652 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: cluster 2026-03-09T17:30:57.020965+0000 mon.a (mon.0) 1652 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.022068+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.022068+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]: dispatch 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.023008+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.100:0/1636484815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.023008+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.100:0/1636484815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.024700+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.024700+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.506947+0000 mon.a (mon.0) 1655 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.506947+0000 mon.a (mon.0) 1655 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.508127+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:57 vm00 bash[20770]: audit 2026-03-09T17:30:57.508127+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.902396+0000 mon.c (mon.2) 379 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.902396+0000 mon.c (mon.2) 379 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: cluster 2026-03-09T17:30:57.994153+0000 mon.a (mon.0) 1656 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: cluster 2026-03-09T17:30:57.994153+0000 mon.a (mon.0) 1656 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.997862+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.997862+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.997938+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.997938+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.997980+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: audit 2026-03-09T17:30:57.997980+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: cluster 2026-03-09T17:30:58.029961+0000 mon.a (mon.0) 1660 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T17:30:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:58 vm02 bash[23351]: cluster 2026-03-09T17:30:58.029961+0000 mon.a (mon.0) 1660 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.902396+0000 mon.c (mon.2) 379 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.902396+0000 mon.c (mon.2) 379 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: cluster 2026-03-09T17:30:57.994153+0000 mon.a (mon.0) 1656 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: cluster 2026-03-09T17:30:57.994153+0000 mon.a (mon.0) 1656 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.997862+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.997862+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.997938+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:59.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.997938+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.997980+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: audit 2026-03-09T17:30:57.997980+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: cluster 2026-03-09T17:30:58.029961+0000 mon.a (mon.0) 1660 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:58 vm00 bash[28333]: cluster 2026-03-09T17:30:58.029961+0000 mon.a (mon.0) 1660 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.902396+0000 mon.c (mon.2) 379 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.902396+0000 mon.c (mon.2) 379 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: cluster 2026-03-09T17:30:57.994153+0000 mon.a (mon.0) 1656 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: cluster 2026-03-09T17:30:57.994153+0000 mon.a (mon.0) 1656 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.997862+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.997862+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59908-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.997938+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.997938+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-21"}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.997980+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: audit 2026-03-09T17:30:57.997980+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: cluster 2026-03-09T17:30:58.029961+0000 mon.a (mon.0) 1660 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T17:30:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:58 vm00 bash[20770]: cluster 2026-03-09T17:30:58.029961+0000 mon.a (mon.0) 1660 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: cluster 2026-03-09T17:30:58.713689+0000 mgr.y (mgr.14505) 213 : cluster [DBG] pgmap v249: 394 pgs: 8 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 333 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: cluster 2026-03-09T17:30:58.713689+0000 mgr.y (mgr.14505) 213 : cluster [DBG] pgmap v249: 394 pgs: 8 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 333 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: audit 2026-03-09T17:30:58.903105+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: audit 2026-03-09T17:30:58.903105+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: cluster 2026-03-09T17:30:59.044544+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: cluster 2026-03-09T17:30:59.044544+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: audit 2026-03-09T17:30:59.074210+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:30:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:30:59 vm02 bash[23351]: audit 2026-03-09T17:30:59.074210+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: cluster 2026-03-09T17:30:58.713689+0000 mgr.y (mgr.14505) 213 : cluster [DBG] pgmap v249: 394 pgs: 8 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 333 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:31:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: cluster 2026-03-09T17:30:58.713689+0000 mgr.y (mgr.14505) 213 : cluster [DBG] pgmap v249: 394 pgs: 8 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 333 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:31:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: audit 2026-03-09T17:30:58.903105+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: audit 2026-03-09T17:30:58.903105+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:00.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: cluster 2026-03-09T17:30:59.044544+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T17:31:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: cluster 2026-03-09T17:30:59.044544+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T17:31:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: audit 2026-03-09T17:30:59.074210+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? 
192.168.123.100:0/1799629309' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:30:59 vm00 bash[28333]: audit 2026-03-09T17:30:59.074210+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: cluster 2026-03-09T17:30:58.713689+0000 mgr.y (mgr.14505) 213 : cluster [DBG] pgmap v249: 394 pgs: 8 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 333 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: cluster 2026-03-09T17:30:58.713689+0000 mgr.y (mgr.14505) 213 : cluster [DBG] pgmap v249: 394 pgs: 8 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 333 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: audit 2026-03-09T17:30:58.903105+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: audit 2026-03-09T17:30:58.903105+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: cluster 2026-03-09T17:30:59.044544+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: cluster 2026-03-09T17:30:59.044544+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: audit 2026-03-09T17:30:59.074210+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:30:59 vm00 bash[20770]: audit 2026-03-09T17:30:59.074210+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? 
192.168.123.100:0/1799629309' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: cluster 2026-03-09T17:30:59.593690+0000 mon.a (mon.0) 1663 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: cluster 2026-03-09T17:30:59.593690+0000 mon.a (mon.0) 1663 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:30:59.903780+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:30:59.903780+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.007169+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.007169+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: cluster 2026-03-09T17:31:00.036722+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: cluster 2026-03-09T17:31:00.036722+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.043568+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.043568+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.043944+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 
192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.043944+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.044861+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.044861+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.045888+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:00 vm02 bash[23351]: audit 2026-03-09T17:31:00.045888+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: cluster 2026-03-09T17:30:59.593690+0000 mon.a (mon.0) 1663 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: cluster 2026-03-09T17:30:59.593690+0000 mon.a (mon.0) 1663 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:30:59.903780+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:30:59.903780+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.007169+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.007169+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 
192.168.123.100:0/1799629309' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: cluster 2026-03-09T17:31:00.036722+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: cluster 2026-03-09T17:31:00.036722+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.043568+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.043568+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.043944+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.043944+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.044861+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.044861+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.045888+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:00 vm00 bash[28333]: audit 2026-03-09T17:31:00.045888+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: cluster 2026-03-09T17:30:59.593690+0000 mon.a (mon.0) 1663 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: cluster 2026-03-09T17:30:59.593690+0000 mon.a (mon.0) 1663 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:30:59.903780+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:30:59.903780+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.007169+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.007169+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.100:0/1799629309' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59916-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: cluster 2026-03-09T17:31:00.036722+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: cluster 2026-03-09T17:31:00.036722+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.043568+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.043568+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.043944+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 
192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.043944+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.044861+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.044861+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.045888+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:00 vm00 bash[20770]: audit 2026-03-09T17:31:00.045888+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: cluster 2026-03-09T17:31:00.714052+0000 mgr.y (mgr.14505) 214 : cluster [DBG] pgmap v252: 418 pgs: 64 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: cluster 2026-03-09T17:31:00.714052+0000 mgr.y (mgr.14505) 214 : cluster [DBG] pgmap v252: 418 pgs: 64 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:00.904381+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:00.904381+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.011016+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.011016+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.011226+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.011226+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: cluster 2026-03-09T17:31:01.025377+0000 mon.a (mon.0) 1670 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: cluster 2026-03-09T17:31:01.025377+0000 mon.a (mon.0) 1670 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.033025+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.033025+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.037970+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:01 vm02 bash[23351]: audit 2026-03-09T17:31:01.037970+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:01.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:31:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:31:02.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: cluster 2026-03-09T17:31:00.714052+0000 mgr.y (mgr.14505) 214 : cluster [DBG] pgmap v252: 418 pgs: 64 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: cluster 2026-03-09T17:31:00.714052+0000 mgr.y (mgr.14505) 214 : cluster [DBG] pgmap v252: 418 pgs: 64 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:00.904381+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:00.904381+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.011016+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.011016+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.011226+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.011226+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: cluster 2026-03-09T17:31:01.025377+0000 mon.a (mon.0) 1670 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: cluster 2026-03-09T17:31:01.025377+0000 mon.a (mon.0) 1670 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.033025+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 
192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.033025+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.037970+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:01 vm00 bash[28333]: audit 2026-03-09T17:31:01.037970+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: cluster 2026-03-09T17:31:00.714052+0000 mgr.y (mgr.14505) 214 : cluster [DBG] pgmap v252: 418 pgs: 64 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: cluster 2026-03-09T17:31:00.714052+0000 mgr.y (mgr.14505) 214 : cluster [DBG] pgmap v252: 418 pgs: 64 unknown, 32 creating+peering, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:00.904381+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:00.904381+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.011016+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.011016+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.011226+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.011226+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: cluster 2026-03-09T17:31:01.025377+0000 mon.a (mon.0) 1670 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: cluster 2026-03-09T17:31:01.025377+0000 mon.a (mon.0) 1670 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.033025+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.033025+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/2520215013' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.037970+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:01 vm00 bash[20770]: audit 2026-03-09T17:31:01.037970+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]: dispatch 2026-03-09T17:31:03.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:01.633592+0000 mgr.y (mgr.14505) 215 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:01.633592+0000 mgr.y (mgr.14505) 215 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:01.905133+0000 mon.c (mon.2) 383 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:01.905133+0000 mon.c (mon.2) 383 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.019213+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.019213+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: cluster 2026-03-09T17:31:02.022148+0000 mon.a (mon.0) 1673 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: cluster 2026-03-09T17:31:02.022148+0000 mon.a (mon.0) 1673 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.050792+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.050792+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.051644+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.051644+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.051993+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.051993+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.052361+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.052361+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 
192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.052787+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.052787+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.053533+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.053533+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.100200+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.100200+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.101829+0000 mon.a (mon.0) 1677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:02 vm00 bash[28333]: audit 2026-03-09T17:31:02.101829+0000 mon.a (mon.0) 1677 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:01.633592+0000 mgr.y (mgr.14505) 215 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:03.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:01.633592+0000 mgr.y (mgr.14505) 215 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:01.905133+0000 mon.c (mon.2) 383 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:01.905133+0000 mon.c (mon.2) 383 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.019213+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.019213+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: cluster 2026-03-09T17:31:02.022148+0000 mon.a (mon.0) 1673 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: cluster 2026-03-09T17:31:02.022148+0000 mon.a (mon.0) 1673 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.050792+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.050792+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.051644+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.051644+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 
192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.051993+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.051993+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.052361+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.052361+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.052787+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.052787+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.053533+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.053533+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.100200+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.100200+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.101829+0000 mon.a (mon.0) 1677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:02 vm00 bash[20770]: audit 2026-03-09T17:31:02.101829+0000 mon.a (mon.0) 1677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:01.633592+0000 mgr.y (mgr.14505) 215 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:01.633592+0000 mgr.y (mgr.14505) 215 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:01.905133+0000 mon.c (mon.2) 383 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:01.905133+0000 mon.c (mon.2) 383 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.019213+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.019213+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59908-33"}]': finished 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: cluster 2026-03-09T17:31:02.022148+0000 mon.a (mon.0) 1673 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: cluster 2026-03-09T17:31:02.022148+0000 mon.a (mon.0) 1673 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.050792+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 
192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.050792+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.051644+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.051644+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.051993+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.051993+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.052361+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.052361+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.052787+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.052787+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.053533+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.053533+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.100200+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.100200+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.101829+0000 mon.a (mon.0) 1677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:02 vm02 bash[23351]: audit 2026-03-09T17:31:02.101829+0000 mon.a (mon.0) 1677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: cluster 2026-03-09T17:31:02.714397+0000 mgr.y (mgr.14505) 216 : cluster [DBG] pgmap v255: 354 pgs: 32 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: cluster 2026-03-09T17:31:02.714397+0000 mgr.y (mgr.14505) 216 : cluster [DBG] pgmap v255: 354 pgs: 32 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:02.906099+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:02.906099+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.026519+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.026519+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.026675+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.026675+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: cluster 2026-03-09T17:31:03.033497+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: cluster 2026-03-09T17:31:03.033497+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.039285+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.039285+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.039815+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.039815+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.041549+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.041549+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.041885+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:03 vm00 bash[28333]: audit 2026-03-09T17:31:03.041885+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: cluster 2026-03-09T17:31:02.714397+0000 mgr.y (mgr.14505) 216 : cluster [DBG] pgmap v255: 354 pgs: 32 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: cluster 2026-03-09T17:31:02.714397+0000 mgr.y (mgr.14505) 216 : cluster [DBG] pgmap v255: 354 pgs: 32 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:02.906099+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:02.906099+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.026519+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.026519+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.026675+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.026675+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: cluster 2026-03-09T17:31:03.033497+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: cluster 2026-03-09T17:31:03.033497+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.039285+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.039285+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.039815+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.039815+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.041549+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.041549+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.041885+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:03 vm00 bash[20770]: audit 2026-03-09T17:31:03.041885+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: cluster 2026-03-09T17:31:02.714397+0000 mgr.y (mgr.14505) 216 : cluster [DBG] pgmap v255: 354 pgs: 32 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: cluster 2026-03-09T17:31:02.714397+0000 mgr.y (mgr.14505) 216 : cluster [DBG] pgmap v255: 354 pgs: 32 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 301 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:02.906099+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:02.906099+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.026519+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.026519+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59908-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.026675+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.026675+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: cluster 2026-03-09T17:31:03.033497+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: cluster 2026-03-09T17:31:03.033497+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.039285+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.039285+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.039815+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.039815+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.041549+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.041549+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.041885+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:03 vm02 bash[23351]: audit 2026-03-09T17:31:03.041885+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:03.907032+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:03.907032+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.030243+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.030243+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.041503+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.041503+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.064534+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/4218697368' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.064534+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/4218697368' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: cluster 2026-03-09T17:31:04.064647+0000 mon.a (mon.0) 1684 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: cluster 2026-03-09T17:31:04.064647+0000 mon.a (mon.0) 1684 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.067149+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.067149+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.067346+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:04 vm00 bash[28333]: audit 2026-03-09T17:31:04.067346+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:03.907032+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:03.907032+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.030243+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.030243+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.041503+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.041503+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.064534+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 
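
    [Illustration, not part of the captured log] The same stretch of audit entries records the cache-tier setup sequence for the test pools: "osd tier add" with --force-nonempty, "osd tier set-overlay", then "osd tier cache-mode ... writeback". A sketch of that sequence via librados follows, under the same assumptions as above (conffile path, admin credentials); the pool names are copied from the log.

        # Sketch: the cache-tier setup sequence recorded in the audit trail,
        # issued as mon commands through librados. Illustration only.
        import json
        import rados

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()

        for payload in (
            {"prefix": "osd tier add",
             "pool": "test-rados-api-vm00-60118-6",
             "tierpool": "test-rados-api-vm00-60118-23",
             "force_nonempty": "--force-nonempty"},
            {"prefix": "osd tier set-overlay",
             "pool": "test-rados-api-vm00-60118-6",
             "overlaypool": "test-rados-api-vm00-60118-23"},
            {"prefix": "osd tier cache-mode",
             "pool": "test-rados-api-vm00-60118-23",
             "mode": "writeback"},
        ):
            ret, outbuf, outs = cluster.mon_command(json.dumps(payload), b'')
            print(payload["prefix"], ret, outs)

        cluster.shutdown()
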
192.168.123.100:0/4218697368' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.064534+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/4218697368' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: cluster 2026-03-09T17:31:04.064647+0000 mon.a (mon.0) 1684 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: cluster 2026-03-09T17:31:04.064647+0000 mon.a (mon.0) 1684 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.067149+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.067149+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.067346+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:04 vm00 bash[20770]: audit 2026-03-09T17:31:04.067346+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:03.907032+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:03.907032+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.030243+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.030243+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.041503+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.041503+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.064534+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/4218697368' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.064534+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/4218697368' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: cluster 2026-03-09T17:31:04.064647+0000 mon.a (mon.0) 1684 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: cluster 2026-03-09T17:31:04.064647+0000 mon.a (mon.0) 1684 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.067149+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.067149+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.067346+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:04 vm02 bash[23351]: audit 2026-03-09T17:31:04.067346+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:04.714752+0000 mgr.y (mgr.14505) 217 : cluster [DBG] pgmap v258: 354 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:04.714752+0000 mgr.y (mgr.14505) 217 : cluster [DBG] pgmap v258: 354 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:06.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:04.907794+0000 mon.c (mon.2) 386 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:04.907794+0000 mon.c (mon.2) 386 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:05.030164+0000 mon.a (mon.0) 1687 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:05.030164+0000 mon.a (mon.0) 1687 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:05.031339+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:05.031339+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:05.040891+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:05.040891+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:05.040975+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? 
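
    [Illustration, not part of the captured log] Shortly after these operations the mon raises the POOL_APP_NOT_ENABLED and CACHE_POOL_NO_HIT_SET health checks, which is expected while the short-lived test pools and the freshly added writeback tier exist. The log also shows a client polling {"prefix":"status","format":"json"} roughly once per second; a minimal way to read those health checks from the same status reply is sketched below (the "health" -> "checks" layout is the usual ceph status JSON structure, and the conffile/credentials are assumed as before).

        # Sketch: read current health checks from the status mon command that
        # the log shows being polled. Illustration only.
        import json
        import rados

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "status", "format": "json"}), b'')
        status = json.loads(outbuf)
        for name, check in status.get("health", {}).get("checks", {}).items():
            # e.g. POOL_APP_NOT_ENABLED HEALTH_WARN, CACHE_POOL_NO_HIT_SET HEALTH_WARN
            print(name, check.get("severity"))
        cluster.shutdown()
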
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:05.040975+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:05.041031+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: audit 2026-03-09T17:31:05.041031+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:05.072214+0000 mon.a (mon.0) 1692 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:05 vm00 bash[28333]: cluster 2026-03-09T17:31:05.072214+0000 mon.a (mon.0) 1692 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:04.714752+0000 mgr.y (mgr.14505) 217 : cluster [DBG] pgmap v258: 354 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:04.714752+0000 mgr.y (mgr.14505) 217 : cluster [DBG] pgmap v258: 354 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:04.907794+0000 mon.c (mon.2) 386 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:04.907794+0000 mon.c (mon.2) 386 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:05.030164+0000 mon.a (mon.0) 1687 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:05.030164+0000 mon.a (mon.0) 1687 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:05.031339+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:05.031339+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:05.040891+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:05.040891+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:05.040975+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:05.040975+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:05.041031+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: audit 2026-03-09T17:31:05.041031+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:05.072214+0000 mon.a (mon.0) 1692 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T17:31:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:05 vm00 bash[20770]: cluster 2026-03-09T17:31:05.072214+0000 mon.a (mon.0) 1692 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:04.714752+0000 mgr.y (mgr.14505) 217 : cluster [DBG] pgmap v258: 354 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:04.714752+0000 mgr.y (mgr.14505) 217 : cluster [DBG] pgmap v258: 354 pgs: 32 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:04.907794+0000 mon.c (mon.2) 386 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:04.907794+0000 mon.c (mon.2) 386 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:05.030164+0000 mon.a (mon.0) 1687 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:05.030164+0000 mon.a (mon.0) 1687 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:05.031339+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:05.031339+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:05.040891+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:05.040891+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59908-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:05.040975+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]': finished 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:05.040975+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-23", "mode": "writeback"}]': finished 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:05.041031+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: audit 2026-03-09T17:31:05.041031+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:05.072214+0000 mon.a (mon.0) 1692 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T17:31:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:05 vm02 bash[23351]: cluster 2026-03-09T17:31:05.072214+0000 mon.a (mon.0) 1692 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T17:31:06.775 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:31:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:31:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:05.908572+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:05.908572+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: cluster 2026-03-09T17:31:06.058845+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: cluster 2026-03-09T17:31:06.058845+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.065202+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 
192.168.123.100:0/2813338555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.065202+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/2813338555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.067064+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.067064+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.101879+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.101879+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.103148+0000 mon.a (mon.0) 1695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:06 vm00 bash[28333]: audit 2026-03-09T17:31:06.103148+0000 mon.a (mon.0) 1695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:05.908572+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:05.908572+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: cluster 2026-03-09T17:31:06.058845+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: cluster 2026-03-09T17:31:06.058845+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.065202+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/2813338555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.065202+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/2813338555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.067064+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.067064+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.101879+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.101879+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.103148+0000 mon.a (mon.0) 1695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:06 vm00 bash[20770]: audit 2026-03-09T17:31:06.103148+0000 mon.a (mon.0) 1695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:05.908572+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:05.908572+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: cluster 2026-03-09T17:31:06.058845+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: cluster 2026-03-09T17:31:06.058845+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.065202+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/2813338555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.065202+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/2813338555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.067064+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.067064+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.101879+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.101879+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.103148+0000 mon.a (mon.0) 1695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:06 vm02 bash[23351]: audit 2026-03-09T17:31:06.103148+0000 mon.a (mon.0) 1695 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: cluster 2026-03-09T17:31:06.715109+0000 mgr.y (mgr.14505) 218 : cluster [DBG] pgmap v261: 394 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: cluster 2026-03-09T17:31:06.715109+0000 mgr.y (mgr.14505) 218 : cluster [DBG] pgmap v261: 394 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:06.909311+0000 mon.c (mon.2) 388 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:06.909311+0000 mon.c (mon.2) 388 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.052557+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.052557+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.052612+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.052612+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.062767+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.062767+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 
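
    [Illustration, not part of the captured log] The entries here are the teardown counterpart of the earlier setup: "osd tier remove-overlay" finishes, then "osd tier remove" detaches the cache pool and "osd erasure-code-profile rm" deletes the temporary profile. A sketch of the same teardown sequence via librados, with the names taken from the audit entries and the usual conffile/credential assumptions:

        # Sketch: cache-tier and EC-profile teardown as recorded in the audit trail.
        # Illustration only.
        import json
        import rados

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        for payload in (
            {"prefix": "osd tier remove-overlay",
             "pool": "test-rados-api-vm00-60118-6"},
            {"prefix": "osd tier remove",
             "pool": "test-rados-api-vm00-60118-6",
             "tierpool": "test-rados-api-vm00-60118-23"},
            {"prefix": "osd erasure-code-profile rm",
             "name": "testprofile-ReturnValue_vm00-59908-34"},
        ):
            ret, outbuf, outs = cluster.mon_command(json.dumps(payload), b'')
            print(payload["prefix"], ret, outs)
        cluster.shutdown()
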
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: cluster 2026-03-09T17:31:07.072696+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: cluster 2026-03-09T17:31:07.072696+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.073186+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.073186+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.077157+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.077157+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.077257+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:07 vm00 bash[28333]: audit 2026-03-09T17:31:07.077257+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: cluster 2026-03-09T17:31:06.715109+0000 mgr.y (mgr.14505) 218 : cluster [DBG] pgmap v261: 394 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: cluster 2026-03-09T17:31:06.715109+0000 mgr.y (mgr.14505) 218 : cluster [DBG] pgmap v261: 394 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:06.909311+0000 mon.c (mon.2) 388 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:06.909311+0000 mon.c (mon.2) 388 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.052557+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.052557+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.052612+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.052612+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.062767+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.062767+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: cluster 2026-03-09T17:31:07.072696+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T17:31:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: cluster 2026-03-09T17:31:07.072696+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T17:31:08.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.073186+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.073186+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 
192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.077157+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.077157+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.077257+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:07 vm00 bash[20770]: audit 2026-03-09T17:31:07.077257+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: cluster 2026-03-09T17:31:06.715109+0000 mgr.y (mgr.14505) 218 : cluster [DBG] pgmap v261: 394 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: cluster 2026-03-09T17:31:06.715109+0000 mgr.y (mgr.14505) 218 : cluster [DBG] pgmap v261: 394 pgs: 72 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 302 active+clean; 459 KiB data, 641 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:06.909311+0000 mon.c (mon.2) 388 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:06.909311+0000 mon.c (mon.2) 388 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.052557+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.052557+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.052612+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.052612+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.062767+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.062767+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: cluster 2026-03-09T17:31:07.072696+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: cluster 2026-03-09T17:31:07.072696+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.073186+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.073186+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.077157+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.077157+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.077257+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:07 vm02 bash[23351]: audit 2026-03-09T17:31:07.077257+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:07.910013+0000 mon.c (mon.2) 389 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:07.910013+0000 mon.c (mon.2) 389 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: cluster 2026-03-09T17:31:08.053032+0000 mon.a (mon.0) 1701 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: cluster 2026-03-09T17:31:08.053032+0000 mon.a (mon.0) 1701 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.057044+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.057044+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.057178+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.057178+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.064703+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.064703+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 
192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: cluster 2026-03-09T17:31:08.081270+0000 mon.a (mon.0) 1704 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: cluster 2026-03-09T17:31:08.081270+0000 mon.a (mon.0) 1704 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.081512+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.100:0/3401308440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.081512+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.100:0/3401308440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.083488+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.083488+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.083768+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:08 vm00 bash[28333]: audit 2026-03-09T17:31:08.083768+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:07.910013+0000 mon.c (mon.2) 389 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:07.910013+0000 mon.c (mon.2) 389 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: cluster 2026-03-09T17:31:08.053032+0000 mon.a (mon.0) 1701 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: cluster 2026-03-09T17:31:08.053032+0000 mon.a (mon.0) 1701 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.057044+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.057044+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.057178+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.057178+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.064703+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.064703+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: cluster 2026-03-09T17:31:08.081270+0000 mon.a (mon.0) 1704 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: cluster 2026-03-09T17:31:08.081270+0000 mon.a (mon.0) 1704 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.081512+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.100:0/3401308440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.081512+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 
192.168.123.100:0/3401308440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.083488+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.083488+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.083768+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:08 vm00 bash[20770]: audit 2026-03-09T17:31:08.083768+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:07.910013+0000 mon.c (mon.2) 389 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:07.910013+0000 mon.c (mon.2) 389 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: cluster 2026-03-09T17:31:08.053032+0000 mon.a (mon.0) 1701 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: cluster 2026-03-09T17:31:08.053032+0000 mon.a (mon.0) 1701 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.057044+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.057044+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-23"}]': finished 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.057178+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.057178+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.064703+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.064703+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/2902689793' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: cluster 2026-03-09T17:31:08.081270+0000 mon.a (mon.0) 1704 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: cluster 2026-03-09T17:31:08.081270+0000 mon.a (mon.0) 1704 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.081512+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.100:0/3401308440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.081512+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.100:0/3401308440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.083488+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.083488+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.083768+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:08 vm02 bash[23351]: audit 2026-03-09T17:31:08.083768+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:10.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: cluster 2026-03-09T17:31:08.715476+0000 mgr.y (mgr.14505) 219 : cluster [DBG] pgmap v264: 418 pgs: 32 unknown, 27 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 349 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: cluster 2026-03-09T17:31:08.715476+0000 mgr.y (mgr.14505) 219 : cluster [DBG] pgmap v264: 418 pgs: 32 unknown, 27 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 349 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:08.911102+0000 mon.c (mon.2) 391 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:08.911102+0000 mon.c (mon.2) 391 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.112405+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.112405+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.112556+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.112556+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: cluster 2026-03-09T17:31:09.130504+0000 mon.a (mon.0) 1709 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: cluster 2026-03-09T17:31:09.130504+0000 mon.a (mon.0) 1709 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.158186+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 
192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.158186+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.164702+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.164702+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.172186+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.172186+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.173263+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.173263+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.173641+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.173641+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.174497+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:09 vm00 bash[28333]: audit 2026-03-09T17:31:09.174497+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: cluster 2026-03-09T17:31:08.715476+0000 mgr.y (mgr.14505) 219 : cluster [DBG] pgmap v264: 418 pgs: 32 unknown, 27 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 349 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: cluster 2026-03-09T17:31:08.715476+0000 mgr.y (mgr.14505) 219 : cluster [DBG] pgmap v264: 418 pgs: 32 unknown, 27 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 349 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:08.911102+0000 mon.c (mon.2) 391 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:08.911102+0000 mon.c (mon.2) 391 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.112405+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.112405+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.112556+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.112556+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: cluster 2026-03-09T17:31:09.130504+0000 mon.a (mon.0) 1709 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: cluster 2026-03-09T17:31:09.130504+0000 mon.a (mon.0) 1709 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T17:31:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.158186+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.158186+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.164702+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.164702+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.172186+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.172186+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.173263+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.173263+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.173641+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.173641+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.174497+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:09 vm00 bash[20770]: audit 2026-03-09T17:31:09.174497+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: cluster 2026-03-09T17:31:08.715476+0000 mgr.y (mgr.14505) 219 : cluster [DBG] pgmap v264: 418 pgs: 32 unknown, 27 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 349 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: cluster 2026-03-09T17:31:08.715476+0000 mgr.y (mgr.14505) 219 : cluster [DBG] pgmap v264: 418 pgs: 32 unknown, 27 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 349 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:08.911102+0000 mon.c (mon.2) 391 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:08.911102+0000 mon.c (mon.2) 391 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.112405+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.112405+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59908-34"}]': finished 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.112556+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.112556+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: cluster 2026-03-09T17:31:09.130504+0000 mon.a (mon.0) 1709 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: cluster 2026-03-09T17:31:09.130504+0000 mon.a (mon.0) 1709 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.158186+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.158186+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.164702+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.164702+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.172186+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.172186+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.173263+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.173263+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.173641+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.173641+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.174497+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:09 vm02 bash[23351]: audit 2026-03-09T17:31:09.174497+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:09.912125+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:09.912125+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.116783+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.116783+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.137909+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.137909+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 
192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: cluster 2026-03-09T17:31:10.145596+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: cluster 2026-03-09T17:31:10.145596+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.146991+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.146991+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.147108+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.147108+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.148373+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.100:0/2053095336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.148373+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.100:0/2053095336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.149887+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.149887+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.150404+0000 mon.a (mon.0) 1717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:10 vm00 bash[28333]: audit 2026-03-09T17:31:10.150404+0000 mon.a (mon.0) 1717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:09.912125+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:09.912125+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.116783+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.116783+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.137909+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.137909+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 
192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: cluster 2026-03-09T17:31:10.145596+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: cluster 2026-03-09T17:31:10.145596+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.146991+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.146991+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.147108+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.147108+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.148373+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.100:0/2053095336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.148373+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.100:0/2053095336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.149887+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.149887+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.150404+0000 mon.a (mon.0) 1717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:10 vm00 bash[20770]: audit 2026-03-09T17:31:10.150404+0000 mon.a (mon.0) 1717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:09.912125+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:09.912125+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.116783+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.116783+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59908-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.137909+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.137909+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 
192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: cluster 2026-03-09T17:31:10.145596+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: cluster 2026-03-09T17:31:10.145596+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.146991+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.146991+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.147108+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.147108+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.148373+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.100:0/2053095336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.148373+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.100:0/2053095336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.149887+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.149887+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.150404+0000 mon.a (mon.0) 1717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:10 vm02 bash[23351]: audit 2026-03-09T17:31:10.150404+0000 mon.a (mon.0) 1717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: cluster 2026-03-09T17:31:10.715836+0000 mgr.y (mgr.14505) 220 : cluster [DBG] pgmap v267: 450 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: cluster 2026-03-09T17:31:10.715836+0000 mgr.y (mgr.14505) 220 : cluster [DBG] pgmap v267: 450 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: audit 2026-03-09T17:31:10.912792+0000 mon.c (mon.2) 394 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: audit 2026-03-09T17:31:10.912792+0000 mon.c (mon.2) 394 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: cluster 2026-03-09T17:31:11.116681+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: cluster 2026-03-09T17:31:11.116681+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: audit 2026-03-09T17:31:11.127351+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: audit 2026-03-09T17:31:11.127351+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: audit 2026-03-09T17:31:11.127397+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: audit 2026-03-09T17:31:11.127397+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: cluster 2026-03-09T17:31:11.162792+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T17:31:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:11 vm02 bash[23351]: cluster 2026-03-09T17:31:11.162792+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T17:31:12.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:31:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:31:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: cluster 2026-03-09T17:31:10.715836+0000 mgr.y (mgr.14505) 220 : cluster [DBG] pgmap v267: 450 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: cluster 2026-03-09T17:31:10.715836+0000 mgr.y (mgr.14505) 220 : cluster [DBG] pgmap v267: 450 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: audit 2026-03-09T17:31:10.912792+0000 mon.c (mon.2) 394 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: audit 2026-03-09T17:31:10.912792+0000 mon.c (mon.2) 394 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: cluster 2026-03-09T17:31:11.116681+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: cluster 2026-03-09T17:31:11.116681+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: audit 2026-03-09T17:31:11.127351+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: audit 2026-03-09T17:31:11.127351+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: audit 2026-03-09T17:31:11.127397+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: audit 2026-03-09T17:31:11.127397+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: cluster 2026-03-09T17:31:11.162792+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:11 vm00 bash[28333]: cluster 2026-03-09T17:31:11.162792+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: cluster 2026-03-09T17:31:10.715836+0000 mgr.y (mgr.14505) 220 : cluster [DBG] pgmap v267: 450 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: cluster 2026-03-09T17:31:10.715836+0000 mgr.y (mgr.14505) 220 : cluster [DBG] pgmap v267: 450 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: audit 2026-03-09T17:31:10.912792+0000 mon.c (mon.2) 394 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: audit 2026-03-09T17:31:10.912792+0000 mon.c (mon.2) 394 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: cluster 2026-03-09T17:31:11.116681+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: cluster 2026-03-09T17:31:11.116681+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: audit 2026-03-09T17:31:11.127351+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: audit 2026-03-09T17:31:11.127351+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: audit 2026-03-09T17:31:11.127397+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: audit 2026-03-09T17:31:11.127397+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: cluster 2026-03-09T17:31:11.162792+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T17:31:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:11 vm00 bash[20770]: cluster 2026-03-09T17:31:11.162792+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:11.644482+0000 mgr.y (mgr.14505) 221 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:11.644482+0000 mgr.y (mgr.14505) 221 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:11.913483+0000 mon.c (mon.2) 395 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:11.913483+0000 mon.c (mon.2) 395 : audit [DBG] from='client.? 
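The POOL_APP_NOT_ENABLED health warning above fires because freshly created test pools carry no application tag; the test then issues "osd pool application enable" with yes_i_really_mean_it, and the warning clears for those pools once the commands reach "finished". A minimal sketch of the same call for one of the pools named in the log, under the same assumed python3-rados setup as the previous example:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Payload copied from the audit entries; the pool name is specific to this run.
    cmd = {"prefix": "osd pool application enable",
           "pool": "test-rados-api-vm00-60118-25",
           "app": "rados",
           "yes_i_really_mean_it": True}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, outs)
    cluster.shutdown()
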
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.131209+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.131209+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: cluster 2026-03-09T17:31:12.144883+0000 mon.a (mon.0) 1723 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: cluster 2026-03-09T17:31:12.144883+0000 mon.a (mon.0) 1723 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.153067+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/1955932716' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.153067+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/1955932716' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.156629+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.156629+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.173179+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.173179+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.174489+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.174489+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.519994+0000 mon.a (mon.0) 1726 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.519994+0000 mon.a (mon.0) 1726 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.522954+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:12 vm02 bash[23351]: audit 2026-03-09T17:31:12.522954+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:11.644482+0000 mgr.y (mgr.14505) 221 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:11.644482+0000 mgr.y (mgr.14505) 221 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:11.913483+0000 mon.c (mon.2) 395 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:11.913483+0000 mon.c (mon.2) 395 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.131209+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.131209+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: cluster 2026-03-09T17:31:12.144883+0000 mon.a (mon.0) 1723 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: cluster 2026-03-09T17:31:12.144883+0000 mon.a (mon.0) 1723 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.153067+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/1955932716' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.153067+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/1955932716' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.156629+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.156629+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.173179+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.173179+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.174489+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.174489+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.519994+0000 mon.a (mon.0) 1726 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.519994+0000 mon.a (mon.0) 1726 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.522954+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:12 vm00 bash[28333]: audit 2026-03-09T17:31:12.522954+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:11.644482+0000 mgr.y (mgr.14505) 221 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:11.644482+0000 mgr.y (mgr.14505) 221 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:11.913483+0000 mon.c (mon.2) 395 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:11.913483+0000 mon.c (mon.2) 395 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.131209+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.131209+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59908-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: cluster 2026-03-09T17:31:12.144883+0000 mon.a (mon.0) 1723 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: cluster 2026-03-09T17:31:12.144883+0000 mon.a (mon.0) 1723 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.153067+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/1955932716' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.153067+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/1955932716' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.156629+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.156629+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.173179+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.173179+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.174489+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.174489+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.519994+0000 mon.a (mon.0) 1726 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.519994+0000 mon.a (mon.0) 1726 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.522954+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:12 vm00 bash[20770]: audit 2026-03-09T17:31:12.522954+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: cluster 2026-03-09T17:31:12.716226+0000 mgr.y (mgr.14505) 222 : cluster [DBG] pgmap v270: 490 pgs: 136 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: cluster 2026-03-09T17:31:12.716226+0000 mgr.y (mgr.14505) 222 : cluster [DBG] pgmap v270: 490 pgs: 136 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:12.914130+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:12.914130+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.135010+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.135010+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.135109+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.135109+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.138153+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.138153+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: cluster 2026-03-09T17:31:13.143671+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: cluster 2026-03-09T17:31:13.143671+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.155791+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:13 vm02 bash[23351]: audit 2026-03-09T17:31:13.155791+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: cluster 2026-03-09T17:31:12.716226+0000 mgr.y (mgr.14505) 222 : cluster [DBG] pgmap v270: 490 pgs: 136 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: cluster 2026-03-09T17:31:12.716226+0000 mgr.y (mgr.14505) 222 : cluster [DBG] pgmap v270: 490 pgs: 136 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:12.914130+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:12.914130+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 
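The "osd tier add" (with --force-nonempty) and "osd tier set-overlay" entries above are the first steps of attaching pool test-rados-api-vm00-60118-25 as a cache tier in front of base pool test-rados-api-vm00-60118-6, as the cache-tier RADOS API tests do. A sketch of the same two mon commands, in the order the audit log shows them, under the same assumed setup as earlier:

    import json
    import rados

    BASE = "test-rados-api-vm00-60118-6"
    CACHE = "test-rados-api-vm00-60118-25"

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Attach CACHE as a tier of BASE, then make it the overlay for client I/O.
    for cmd in (
        {"prefix": "osd tier add", "pool": BASE, "tierpool": CACHE,
         "force_nonempty": "--force-nonempty"},
        {"prefix": "osd tier set-overlay", "pool": BASE, "overlaypool": CACHE},
    ):
        ret, _, outs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, outs
    cluster.shutdown()
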
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.135010+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.135010+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.135109+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.135109+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.138153+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.138153+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: cluster 2026-03-09T17:31:13.143671+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: cluster 2026-03-09T17:31:13.143671+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.155791+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:13 vm00 bash[28333]: audit 2026-03-09T17:31:13.155791+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: cluster 2026-03-09T17:31:12.716226+0000 mgr.y (mgr.14505) 222 : cluster [DBG] pgmap v270: 490 pgs: 136 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: cluster 2026-03-09T17:31:12.716226+0000 mgr.y (mgr.14505) 222 : cluster [DBG] pgmap v270: 490 pgs: 136 unknown, 11 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 333 active+clean; 459 KiB data, 646 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:12.914130+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:12.914130+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.135010+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.135010+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59916-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.135109+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.135109+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.138153+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.138153+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: cluster 2026-03-09T17:31:13.143671+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: cluster 2026-03-09T17:31:13.143671+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.155791+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:13 vm00 bash[20770]: audit 2026-03-09T17:31:13.155791+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:13.915176+0000 mon.c (mon.2) 398 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:13.915176+0000 mon.c (mon.2) 398 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.138958+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.138958+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.147211+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.147211+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.155226+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 
192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.155226+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: cluster 2026-03-09T17:31:14.155592+0000 mon.a (mon.0) 1732 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: cluster 2026-03-09T17:31:14.155592+0000 mon.a (mon.0) 1732 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.162127+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.162127+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.162292+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.162292+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.717369+0000 mon.c (mon.2) 399 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.717369+0000 mon.c (mon.2) 399 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.718100+0000 mon.a (mon.0) 1735 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:14 vm02 bash[23351]: audit 2026-03-09T17:31:14.718100+0000 mon.a (mon.0) 1735 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:13.915176+0000 mon.c (mon.2) 398 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:13.915176+0000 mon.c (mon.2) 398 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.138958+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.138958+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.147211+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.147211+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 
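With "osd tier cache-mode ... writeback" dispatched and the temporary erasure-code profile removed, the cache-tier setup for this test case is complete; the remaining mgr entries ("osd pool set ... pgp_num_actual", "osd blocklist ls") are routine background adjustments. To summarize activity like the above without reading every duplicated line, a rough Python sketch that tallies command prefixes per phase from individual journal entries; the regex is an approximation of the audit format seen here, not an authoritative parser:

    import json
    import re
    from collections import Counter

    # Approximate shape: "... audit [LVL] from='...' entity='...' cmd=...: dispatch|finished"
    AUDIT_RE = re.compile(
        r"audit \[(?P<level>\w+)\] from='(?P<src>[^']*)' entity='(?P<entity>[^']*)' "
        r"cmd='?(?P<cmd>\[.*\])'?: (?P<phase>dispatch|finished)")

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = AUDIT_RE.search(line)
            if not m:
                continue  # cluster lines and cmd-less audit entries are skipped
            try:
                prefix = json.loads(m.group('cmd'))[0].get('prefix', '?')
            except (ValueError, IndexError, AttributeError):
                prefix = '?'
            counts[(prefix, m.group('phase'))] += 1
        return counts

Fed one journal entry per line, this gives a quick count of how many pool, tier, and status operations reached "dispatch" versus "finished" during the workunit.
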
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.155226+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.155226+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: cluster 2026-03-09T17:31:14.155592+0000 mon.a (mon.0) 1732 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: cluster 2026-03-09T17:31:14.155592+0000 mon.a (mon.0) 1732 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.162127+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.162127+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.162292+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.162292+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.717369+0000 mon.c (mon.2) 399 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.717369+0000 mon.c (mon.2) 399 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.718100+0000 mon.a (mon.0) 1735 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:14 vm00 bash[28333]: audit 2026-03-09T17:31:14.718100+0000 mon.a (mon.0) 1735 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:13.915176+0000 mon.c (mon.2) 398 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:13.915176+0000 mon.c (mon.2) 398 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.138958+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.138958+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.147211+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.147211+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.155226+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.155226+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: cluster 2026-03-09T17:31:14.155592+0000 mon.a (mon.0) 1732 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: cluster 2026-03-09T17:31:14.155592+0000 mon.a (mon.0) 1732 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.162127+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.162127+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.162292+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.162292+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.717369+0000 mon.c (mon.2) 399 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.717369+0000 mon.c (mon.2) 399 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.718100+0000 mon.a (mon.0) 1735 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:15.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:14 vm00 bash[20770]: audit 2026-03-09T17:31:14.718100+0000 mon.a (mon.0) 1735 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: cluster 2026-03-09T17:31:14.716658+0000 mgr.y (mgr.14505) 223 : cluster [DBG] pgmap v273: 450 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 430 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: cluster 2026-03-09T17:31:14.716658+0000 mgr.y (mgr.14505) 223 : cluster [DBG] pgmap v273: 450 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 430 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:14.916235+0000 mon.c (mon.2) 400 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:14.916235+0000 mon.c (mon.2) 400 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: cluster 2026-03-09T17:31:15.172176+0000 mon.a (mon.0) 1736 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: cluster 2026-03-09T17:31:15.172176+0000 mon.a (mon.0) 1736 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.214900+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]': finished 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.214900+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]': finished 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.214982+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.214982+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.215033+0000 mon.a (mon.0) 1739 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.215033+0000 mon.a (mon.0) 1739 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.244886+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.244886+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: cluster 2026-03-09T17:31:15.250266+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: cluster 2026-03-09T17:31:15.250266+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.252963+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.252963+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.382194+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.382194+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.383592+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:15 vm02 bash[23351]: audit 2026-03-09T17:31:15.383592+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: cluster 2026-03-09T17:31:14.716658+0000 mgr.y (mgr.14505) 223 : cluster [DBG] pgmap v273: 450 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 430 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: cluster 2026-03-09T17:31:14.716658+0000 mgr.y (mgr.14505) 223 : cluster [DBG] pgmap v273: 450 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 430 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:14.916235+0000 mon.c (mon.2) 400 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:14.916235+0000 mon.c (mon.2) 400 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: cluster 2026-03-09T17:31:15.172176+0000 mon.a (mon.0) 1736 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: cluster 2026-03-09T17:31:15.172176+0000 mon.a (mon.0) 1736 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.214900+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.214900+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.214982+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.214982+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.215033+0000 mon.a (mon.0) 1739 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.215033+0000 mon.a (mon.0) 1739 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.244886+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.244886+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: cluster 2026-03-09T17:31:15.250266+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: cluster 2026-03-09T17:31:15.250266+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.252963+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.252963+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.382194+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.382194+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.383592+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:15 vm00 bash[28333]: audit 2026-03-09T17:31:15.383592+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: cluster 2026-03-09T17:31:14.716658+0000 mgr.y (mgr.14505) 223 : cluster [DBG] pgmap v273: 450 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 430 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: cluster 2026-03-09T17:31:14.716658+0000 mgr.y (mgr.14505) 223 : cluster [DBG] pgmap v273: 450 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 430 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:14.916235+0000 mon.c (mon.2) 400 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:14.916235+0000 mon.c (mon.2) 400 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: cluster 2026-03-09T17:31:15.172176+0000 mon.a (mon.0) 1736 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: cluster 2026-03-09T17:31:15.172176+0000 mon.a (mon.0) 1736 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.214900+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.214900+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-25", "mode": "writeback"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.214982+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.214982+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.215033+0000 mon.a (mon.0) 1739 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.215033+0000 mon.a (mon.0) 1739 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.244886+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.244886+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/1985086394' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: cluster 2026-03-09T17:31:15.250266+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: cluster 2026-03-09T17:31:15.250266+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T17:31:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.252963+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.252963+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]: dispatch 2026-03-09T17:31:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.382194+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.382194+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.383592+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:15 vm00 bash[20770]: audit 2026-03-09T17:31:15.383592+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:16.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:31:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:31:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:15.917151+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:15.917151+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.220401+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.220401+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.220451+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.220451+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: cluster 2026-03-09T17:31:16.223175+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: cluster 2026-03-09T17:31:16.223175+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.224753+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.224753+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.241386+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.241386+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.257036+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.257036+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.257536+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.257536+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.259017+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.259017+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.259483+0000 mon.a (mon.0) 1748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.259483+0000 mon.a (mon.0) 1748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.260052+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 
192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.260052+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.260473+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.260473+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: cluster 2026-03-09T17:31:16.607935+0000 mon.a (mon.0) 1750 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: cluster 2026-03-09T17:31:16.607935+0000 mon.a (mon.0) 1750 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.717432+0000 mon.c (mon.2) 405 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.717432+0000 mon.c (mon.2) 405 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.718406+0000 mon.a (mon.0) 1751 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:16 vm02 bash[23351]: audit 2026-03-09T17:31:16.718406+0000 mon.a (mon.0) 1751 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:15.917151+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:15.917151+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.220401+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.220401+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.220451+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.220451+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: cluster 2026-03-09T17:31:16.223175+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: cluster 2026-03-09T17:31:16.223175+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.224753+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.224753+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.241386+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.241386+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.257036+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.257036+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.257536+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.257536+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.259017+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.259017+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.259483+0000 mon.a (mon.0) 1748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.259483+0000 mon.a (mon.0) 1748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.260052+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.260052+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 
192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.260473+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.260473+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: cluster 2026-03-09T17:31:16.607935+0000 mon.a (mon.0) 1750 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: cluster 2026-03-09T17:31:16.607935+0000 mon.a (mon.0) 1750 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.717432+0000 mon.c (mon.2) 405 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.717432+0000 mon.c (mon.2) 405 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.718406+0000 mon.a (mon.0) 1751 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:16 vm00 bash[28333]: audit 2026-03-09T17:31:16.718406+0000 mon.a (mon.0) 1751 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:15.917151+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:15.917151+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.220401+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.220401+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59908-35"}]': finished 2026-03-09T17:31:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.220451+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.220451+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: cluster 2026-03-09T17:31:16.223175+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: cluster 2026-03-09T17:31:16.223175+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.224753+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.224753+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.241386+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.241386+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.257036+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 
192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.257036+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.257536+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.257536+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.259017+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.259017+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.259483+0000 mon.a (mon.0) 1748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.259483+0000 mon.a (mon.0) 1748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.260052+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.260052+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.260473+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.260473+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: cluster 2026-03-09T17:31:16.607935+0000 mon.a (mon.0) 1750 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: cluster 2026-03-09T17:31:16.607935+0000 mon.a (mon.0) 1750 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.717432+0000 mon.c (mon.2) 405 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.717432+0000 mon.c (mon.2) 405 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.718406+0000 mon.a (mon.0) 1751 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:17.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:16 vm00 bash[20770]: audit 2026-03-09T17:31:16.718406+0000 mon.a (mon.0) 1751 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: cluster 2026-03-09T17:31:16.717075+0000 mgr.y (mgr.14505) 224 : cluster [DBG] pgmap v276: 386 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 366 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: cluster 2026-03-09T17:31:16.717075+0000 mgr.y (mgr.14505) 224 : cluster [DBG] pgmap v276: 386 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 366 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:16.918008+0000 mon.c (mon.2) 406 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:16.918008+0000 mon.c (mon.2) 406 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: cluster 2026-03-09T17:31:17.220630+0000 mon.a (mon.0) 1752 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: cluster 2026-03-09T17:31:17.220630+0000 mon.a (mon.0) 1752 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.223164+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.223164+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.223385+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.223385+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.223526+0000 mon.a (mon.0) 1755 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.223526+0000 mon.a (mon.0) 1755 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.233401+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 
192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.233401+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: cluster 2026-03-09T17:31:17.234920+0000 mon.a (mon.0) 1756 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: cluster 2026-03-09T17:31:17.234920+0000 mon.a (mon.0) 1756 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.244962+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:17 vm02 bash[23351]: audit 2026-03-09T17:31:17.244962+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: cluster 2026-03-09T17:31:16.717075+0000 mgr.y (mgr.14505) 224 : cluster [DBG] pgmap v276: 386 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 366 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: cluster 2026-03-09T17:31:16.717075+0000 mgr.y (mgr.14505) 224 : cluster [DBG] pgmap v276: 386 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 366 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:16.918008+0000 mon.c (mon.2) 406 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:16.918008+0000 mon.c (mon.2) 406 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: cluster 2026-03-09T17:31:17.220630+0000 mon.a (mon.0) 1752 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: cluster 2026-03-09T17:31:17.220630+0000 mon.a (mon.0) 1752 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.223164+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.223164+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.223385+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.223385+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.223526+0000 mon.a (mon.0) 1755 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.223526+0000 mon.a (mon.0) 1755 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.233401+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.233401+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 
192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: cluster 2026-03-09T17:31:17.234920+0000 mon.a (mon.0) 1756 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: cluster 2026-03-09T17:31:17.234920+0000 mon.a (mon.0) 1756 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.244962+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:17 vm00 bash[28333]: audit 2026-03-09T17:31:17.244962+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: cluster 2026-03-09T17:31:16.717075+0000 mgr.y (mgr.14505) 224 : cluster [DBG] pgmap v276: 386 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 366 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: cluster 2026-03-09T17:31:16.717075+0000 mgr.y (mgr.14505) 224 : cluster [DBG] pgmap v276: 386 pgs: 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 366 active+clean; 459 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:16.918008+0000 mon.c (mon.2) 406 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:16.918008+0000 mon.c (mon.2) 406 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: cluster 2026-03-09T17:31:17.220630+0000 mon.a (mon.0) 1752 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: cluster 2026-03-09T17:31:17.220630+0000 mon.a (mon.0) 1752 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.223164+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.223164+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-25"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.223385+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.223385+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59908-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.223526+0000 mon.a (mon.0) 1755 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.223526+0000 mon.a (mon.0) 1755 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.233401+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.233401+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: cluster 2026-03-09T17:31:17.234920+0000 mon.a (mon.0) 1756 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: cluster 2026-03-09T17:31:17.234920+0000 mon.a (mon.0) 1756 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.244962+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:17 vm00 bash[20770]: audit 2026-03-09T17:31:17.244962+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: audit 2026-03-09T17:31:17.918689+0000 mon.c (mon.2) 408 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: audit 2026-03-09T17:31:17.918689+0000 mon.c (mon.2) 408 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: cluster 2026-03-09T17:31:18.238570+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: cluster 2026-03-09T17:31:18.238570+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: audit 2026-03-09T17:31:18.718169+0000 mon.c (mon.2) 409 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: audit 2026-03-09T17:31:18.718169+0000 mon.c (mon.2) 409 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: audit 2026-03-09T17:31:18.719417+0000 mon.a (mon.0) 1759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:18 vm02 bash[23351]: audit 2026-03-09T17:31:18.719417+0000 mon.a (mon.0) 1759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: audit 2026-03-09T17:31:17.918689+0000 mon.c (mon.2) 408 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: audit 2026-03-09T17:31:17.918689+0000 mon.c (mon.2) 408 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: cluster 2026-03-09T17:31:18.238570+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: cluster 2026-03-09T17:31:18.238570+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: audit 2026-03-09T17:31:18.718169+0000 mon.c (mon.2) 409 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: audit 2026-03-09T17:31:18.718169+0000 mon.c (mon.2) 409 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: audit 2026-03-09T17:31:18.719417+0000 mon.a (mon.0) 1759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:18 vm00 bash[28333]: audit 2026-03-09T17:31:18.719417+0000 mon.a (mon.0) 1759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: audit 2026-03-09T17:31:17.918689+0000 mon.c (mon.2) 408 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: audit 2026-03-09T17:31:17.918689+0000 mon.c (mon.2) 408 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: cluster 2026-03-09T17:31:18.238570+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: cluster 2026-03-09T17:31:18.238570+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: audit 2026-03-09T17:31:18.718169+0000 mon.c (mon.2) 409 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: audit 2026-03-09T17:31:18.718169+0000 mon.c (mon.2) 409 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: audit 2026-03-09T17:31:18.719417+0000 mon.a (mon.0) 1759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:18 vm00 bash[20770]: audit 2026-03-09T17:31:18.719417+0000 mon.a (mon.0) 1759 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: cluster 2026-03-09T17:31:18.717595+0000 mgr.y (mgr.14505) 225 : cluster [DBG] pgmap v279: 290 pgs: 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: cluster 2026-03-09T17:31:18.717595+0000 mgr.y (mgr.14505) 225 : cluster [DBG] pgmap v279: 290 pgs: 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:18.919594+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:18.919594+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: cluster 2026-03-09T17:31:19.226086+0000 mon.a (mon.0) 1760 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: cluster 2026-03-09T17:31:19.226086+0000 mon.a (mon.0) 1760 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.246309+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.246309+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.246628+0000 mon.a (mon.0) 1762 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.246628+0000 mon.a (mon.0) 1762 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.258645+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.258645+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.258956+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3590968337' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.258956+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 
192.168.123.100:0/3590968337' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: cluster 2026-03-09T17:31:19.259855+0000 mon.a (mon.0) 1763 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: cluster 2026-03-09T17:31:19.259855+0000 mon.a (mon.0) 1763 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.260995+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.260995+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.261097+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:19 vm02 bash[23351]: audit 2026-03-09T17:31:19.261097+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: cluster 2026-03-09T17:31:18.717595+0000 mgr.y (mgr.14505) 225 : cluster [DBG] pgmap v279: 290 pgs: 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: cluster 2026-03-09T17:31:18.717595+0000 mgr.y (mgr.14505) 225 : cluster [DBG] pgmap v279: 290 pgs: 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:18.919594+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:18.919594+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: cluster 2026-03-09T17:31:19.226086+0000 mon.a (mon.0) 1760 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: cluster 2026-03-09T17:31:19.226086+0000 mon.a (mon.0) 1760 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.246309+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.246309+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.246628+0000 mon.a (mon.0) 1762 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.246628+0000 mon.a (mon.0) 1762 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.258645+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.258645+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.258956+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3590968337' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.258956+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 
192.168.123.100:0/3590968337' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: cluster 2026-03-09T17:31:19.259855+0000 mon.a (mon.0) 1763 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: cluster 2026-03-09T17:31:19.259855+0000 mon.a (mon.0) 1763 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.260995+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.260995+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.261097+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:19 vm00 bash[28333]: audit 2026-03-09T17:31:19.261097+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: cluster 2026-03-09T17:31:18.717595+0000 mgr.y (mgr.14505) 225 : cluster [DBG] pgmap v279: 290 pgs: 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: cluster 2026-03-09T17:31:18.717595+0000 mgr.y (mgr.14505) 225 : cluster [DBG] pgmap v279: 290 pgs: 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:18.919594+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:18.919594+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: cluster 2026-03-09T17:31:19.226086+0000 mon.a (mon.0) 1760 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: cluster 2026-03-09T17:31:19.226086+0000 mon.a (mon.0) 1760 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.246309+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.246309+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59908-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.246628+0000 mon.a (mon.0) 1762 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.246628+0000 mon.a (mon.0) 1762 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.258645+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.258645+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.258956+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3590968337' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.258956+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 
192.168.123.100:0/3590968337' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: cluster 2026-03-09T17:31:19.259855+0000 mon.a (mon.0) 1763 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: cluster 2026-03-09T17:31:19.259855+0000 mon.a (mon.0) 1763 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.260995+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.260995+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.261097+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:19 vm00 bash[20770]: audit 2026-03-09T17:31:19.261097+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:21.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: audit 2026-03-09T17:31:19.920586+0000 mon.c (mon.2) 411 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: audit 2026-03-09T17:31:19.920586+0000 mon.c (mon.2) 411 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: audit 2026-03-09T17:31:20.250489+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: audit 2026-03-09T17:31:20.250489+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: audit 2026-03-09T17:31:20.250614+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: audit 2026-03-09T17:31:20.250614+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: cluster 2026-03-09T17:31:20.256538+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T17:31:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:20 vm02 bash[23351]: cluster 2026-03-09T17:31:20.256538+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: audit 2026-03-09T17:31:19.920586+0000 mon.c (mon.2) 411 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: audit 2026-03-09T17:31:19.920586+0000 mon.c (mon.2) 411 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: audit 2026-03-09T17:31:20.250489+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: audit 2026-03-09T17:31:20.250489+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: audit 2026-03-09T17:31:20.250614+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: audit 2026-03-09T17:31:20.250614+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: cluster 2026-03-09T17:31:20.256538+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:20 vm00 bash[20770]: cluster 2026-03-09T17:31:20.256538+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: audit 2026-03-09T17:31:19.920586+0000 mon.c (mon.2) 411 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: audit 2026-03-09T17:31:19.920586+0000 mon.c (mon.2) 411 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: audit 2026-03-09T17:31:20.250489+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: audit 2026-03-09T17:31:20.250489+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: audit 2026-03-09T17:31:20.250614+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: audit 2026-03-09T17:31:20.250614+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59916-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: cluster 2026-03-09T17:31:20.256538+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T17:31:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:20 vm00 bash[28333]: cluster 2026-03-09T17:31:20.256538+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: cluster 2026-03-09T17:31:20.718152+0000 mgr.y (mgr.14505) 226 : cluster [DBG] pgmap v282: 362 pgs: 72 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: cluster 2026-03-09T17:31:20.718152+0000 mgr.y (mgr.14505) 226 : cluster [DBG] pgmap v282: 362 pgs: 72 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:20.921440+0000 mon.c (mon.2) 412 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:20.921440+0000 mon.c (mon.2) 412 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: cluster 2026-03-09T17:31:21.266159+0000 mon.a (mon.0) 1769 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: cluster 2026-03-09T17:31:21.266159+0000 mon.a (mon.0) 1769 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.273746+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.273746+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.274095+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.274095+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.309740+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.309740+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.311007+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: audit 2026-03-09T17:31:21.311007+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: cluster 2026-03-09T17:31:21.609049+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:21 vm02 bash[23351]: cluster 2026-03-09T17:31:21.609049+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:22.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:31:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:31:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: cluster 2026-03-09T17:31:20.718152+0000 mgr.y (mgr.14505) 226 : cluster [DBG] pgmap v282: 362 pgs: 72 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: cluster 2026-03-09T17:31:20.718152+0000 mgr.y (mgr.14505) 226 : cluster [DBG] pgmap v282: 362 pgs: 72 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:20.921440+0000 mon.c (mon.2) 412 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:20.921440+0000 mon.c (mon.2) 412 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: cluster 2026-03-09T17:31:21.266159+0000 mon.a (mon.0) 1769 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: cluster 2026-03-09T17:31:21.266159+0000 mon.a (mon.0) 1769 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.273746+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.273746+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.274095+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.274095+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.309740+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.309740+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.311007+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: audit 2026-03-09T17:31:21.311007+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: cluster 2026-03-09T17:31:21.609049+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:21 vm00 bash[28333]: cluster 2026-03-09T17:31:21.609049+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: cluster 2026-03-09T17:31:20.718152+0000 mgr.y (mgr.14505) 226 : cluster [DBG] pgmap v282: 362 pgs: 72 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: cluster 2026-03-09T17:31:20.718152+0000 mgr.y (mgr.14505) 226 : cluster [DBG] pgmap v282: 362 pgs: 72 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 269 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:20.921440+0000 mon.c (mon.2) 412 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:20.921440+0000 mon.c (mon.2) 412 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: cluster 2026-03-09T17:31:21.266159+0000 mon.a (mon.0) 1769 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: cluster 2026-03-09T17:31:21.266159+0000 mon.a (mon.0) 1769 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.273746+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.273746+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.274095+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.274095+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.309740+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.309740+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.311007+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: audit 2026-03-09T17:31:21.311007+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: cluster 2026-03-09T17:31:21.609049+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:21 vm00 bash[20770]: cluster 2026-03-09T17:31:21.609049+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:21.648625+0000 mgr.y (mgr.14505) 227 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:21.648625+0000 mgr.y (mgr.14505) 227 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:21.922365+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:21.922365+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.262275+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.262275+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.262319+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.262319+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.267732+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.267732+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: cluster 2026-03-09T17:31:22.272114+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: cluster 2026-03-09T17:31:22.272114+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.274393+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.274393+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.278996+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.278996+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.279543+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.279543+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.279638+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:22 vm02 bash[23351]: audit 2026-03-09T17:31:22.279638+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:21.648625+0000 mgr.y (mgr.14505) 227 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:21.648625+0000 mgr.y (mgr.14505) 227 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:21.922365+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:21.922365+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.262275+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.262275+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.262319+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.262319+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.267732+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.267732+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: cluster 2026-03-09T17:31:22.272114+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: cluster 2026-03-09T17:31:22.272114+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.274393+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.274393+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.278996+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.278996+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.279543+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.279543+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.279638+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:22 vm00 bash[28333]: audit 2026-03-09T17:31:22.279638+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:21.648625+0000 mgr.y (mgr.14505) 227 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:21.648625+0000 mgr.y (mgr.14505) 227 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:21.922365+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:21.922365+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.262275+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.262275+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.262319+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.262319+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.267732+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.267732+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: cluster 2026-03-09T17:31:22.272114+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: cluster 2026-03-09T17:31:22.272114+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.274393+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.274393+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.278996+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.278996+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.100:0/1059830174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.279543+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.279543+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.279638+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:22 vm00 bash[20770]: audit 2026-03-09T17:31:22.279638+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: cluster 2026-03-09T17:31:22.718490+0000 mgr.y (mgr.14505) 228 : cluster [DBG] pgmap v285: 353 pgs: 64 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 268 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: cluster 2026-03-09T17:31:22.718490+0000 mgr.y (mgr.14505) 228 : cluster [DBG] pgmap v285: 353 pgs: 64 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 268 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:22.923093+0000 mon.c (mon.2) 416 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:22.923093+0000 mon.c (mon.2) 416 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.270629+0000 mon.a (mon.0) 1779 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.270629+0000 mon.a (mon.0) 1779 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.270775+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.270775+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.270913+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.270913+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.279381+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.279381+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: cluster 2026-03-09T17:31:23.289464+0000 mon.a (mon.0) 1782 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: cluster 2026-03-09T17:31:23.289464+0000 mon.a (mon.0) 1782 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.290915+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.290915+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.313623+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.313623+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.314826+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.314826+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.316361+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:23 vm02 bash[23351]: audit 2026-03-09T17:31:23.316361+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:24.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: cluster 2026-03-09T17:31:22.718490+0000 mgr.y (mgr.14505) 228 : cluster [DBG] pgmap v285: 353 pgs: 64 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 268 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: cluster 2026-03-09T17:31:22.718490+0000 mgr.y (mgr.14505) 228 : cluster [DBG] pgmap v285: 353 pgs: 64 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 268 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:22.923093+0000 mon.c (mon.2) 416 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:22.923093+0000 mon.c (mon.2) 416 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.270629+0000 mon.a (mon.0) 1779 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.270629+0000 mon.a (mon.0) 1779 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.270775+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.270775+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.270913+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.270913+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.279381+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.279381+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: cluster 2026-03-09T17:31:23.289464+0000 mon.a (mon.0) 1782 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: cluster 2026-03-09T17:31:23.289464+0000 mon.a (mon.0) 1782 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.290915+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.290915+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.313623+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.313623+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.314826+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.314826+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.316361+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:23 vm00 bash[28333]: audit 2026-03-09T17:31:23.316361+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: cluster 2026-03-09T17:31:22.718490+0000 mgr.y (mgr.14505) 228 : cluster [DBG] pgmap v285: 353 pgs: 64 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 268 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: cluster 2026-03-09T17:31:22.718490+0000 mgr.y (mgr.14505) 228 : cluster [DBG] pgmap v285: 353 pgs: 64 unknown, 1 peering, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 268 active+clean; 458 KiB data, 651 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:22.923093+0000 mon.c (mon.2) 416 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:22.923093+0000 mon.c (mon.2) 416 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.270629+0000 mon.a (mon.0) 1779 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.270629+0000 mon.a (mon.0) 1779 : audit [INF] from='client.? 192.168.123.100:0/1494882174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59916-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.270775+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.270775+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59908-36"}]': finished 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.270913+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.270913+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.279381+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.279381+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: cluster 2026-03-09T17:31:23.289464+0000 mon.a (mon.0) 1782 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: cluster 2026-03-09T17:31:23.289464+0000 mon.a (mon.0) 1782 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.290915+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.290915+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.313623+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.313623+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.314826+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.314826+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.316361+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:23 vm00 bash[20770]: audit 2026-03-09T17:31:23.316361+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:23.924087+0000 mon.c (mon.2) 417 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:23.924087+0000 mon.c (mon.2) 417 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: cluster 2026-03-09T17:31:24.270672+0000 mon.a (mon.0) 1787 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: cluster 2026-03-09T17:31:24.270672+0000 mon.a (mon.0) 1787 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.278058+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]': finished 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.278058+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]': finished 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.278192+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.278192+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: cluster 2026-03-09T17:31:24.301598+0000 mon.a (mon.0) 1790 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: cluster 2026-03-09T17:31:24.301598+0000 mon.a (mon.0) 1790 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.302220+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.302220+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.314335+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.314335+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.315298+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.315298+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.316046+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.316046+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.719453+0000 mon.c (mon.2) 418 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.719453+0000 mon.c (mon.2) 418 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.720882+0000 mon.a (mon.0) 1795 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:24 vm00 bash[28333]: audit 2026-03-09T17:31:24.720882+0000 mon.a (mon.0) 1795 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:23.924087+0000 mon.c (mon.2) 417 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:23.924087+0000 mon.c (mon.2) 417 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: cluster 2026-03-09T17:31:24.270672+0000 mon.a (mon.0) 1787 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: cluster 2026-03-09T17:31:24.270672+0000 mon.a (mon.0) 1787 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.278058+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]': finished 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.278058+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]': finished 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.278192+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.278192+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: cluster 2026-03-09T17:31:24.301598+0000 mon.a (mon.0) 1790 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: cluster 2026-03-09T17:31:24.301598+0000 mon.a (mon.0) 1790 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.302220+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.302220+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.314335+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.314335+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.315298+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.315298+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.316046+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.316046+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.719453+0000 mon.c (mon.2) 418 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.719453+0000 mon.c (mon.2) 418 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.720882+0000 mon.a (mon.0) 1795 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:24 vm00 bash[20770]: audit 2026-03-09T17:31:24.720882+0000 mon.a (mon.0) 1795 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:23.924087+0000 mon.c (mon.2) 417 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:23.924087+0000 mon.c (mon.2) 417 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: cluster 2026-03-09T17:31:24.270672+0000 mon.a (mon.0) 1787 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: cluster 2026-03-09T17:31:24.270672+0000 mon.a (mon.0) 1787 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.278058+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]': finished 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.278058+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-27", "mode": "writeback"}]': finished 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.278192+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.278192+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59908-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: cluster 2026-03-09T17:31:24.301598+0000 mon.a (mon.0) 1790 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: cluster 2026-03-09T17:31:24.301598+0000 mon.a (mon.0) 1790 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.302220+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.302220+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.314335+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.314335+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.315298+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.315298+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.316046+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.316046+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.719453+0000 mon.c (mon.2) 418 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.719453+0000 mon.c (mon.2) 418 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.720882+0000 mon.a (mon.0) 1795 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:24 vm02 bash[23351]: audit 2026-03-09T17:31:24.720882+0000 mon.a (mon.0) 1795 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: cluster 2026-03-09T17:31:24.718895+0000 mgr.y (mgr.14505) 229 : cluster [DBG] pgmap v288: 321 pgs: 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 302 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: cluster 2026-03-09T17:31:24.718895+0000 mgr.y (mgr.14505) 229 : cluster [DBG] pgmap v288: 321 pgs: 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 302 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:24.929340+0000 mon.c (mon.2) 419 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:24.929340+0000 mon.c (mon.2) 419 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: cluster 2026-03-09T17:31:25.279217+0000 mon.a (mon.0) 1796 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: cluster 2026-03-09T17:31:25.279217+0000 mon.a (mon.0) 1796 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.281578+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.281578+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.281644+0000 mon.a (mon.0) 1798 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.281644+0000 mon.a (mon.0) 1798 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: cluster 2026-03-09T17:31:25.306826+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: cluster 2026-03-09T17:31:25.306826+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.308334+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.308334+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.338008+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.338008+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.339311+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:25 vm00 bash[28333]: audit 2026-03-09T17:31:25.339311+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: cluster 2026-03-09T17:31:24.718895+0000 mgr.y (mgr.14505) 229 : cluster [DBG] pgmap v288: 321 pgs: 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 302 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: cluster 2026-03-09T17:31:24.718895+0000 mgr.y (mgr.14505) 229 : cluster [DBG] pgmap v288: 321 pgs: 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 302 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:24.929340+0000 mon.c (mon.2) 419 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:24.929340+0000 mon.c (mon.2) 419 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: cluster 2026-03-09T17:31:25.279217+0000 mon.a (mon.0) 1796 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: cluster 2026-03-09T17:31:25.279217+0000 mon.a (mon.0) 1796 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.281578+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.281578+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.281644+0000 mon.a (mon.0) 1798 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.281644+0000 mon.a (mon.0) 1798 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]': finished 2026-03-09T17:31:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: cluster 2026-03-09T17:31:25.306826+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: cluster 2026-03-09T17:31:25.306826+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.308334+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.308334+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.338008+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.338008+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.339311+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:25 vm00 bash[20770]: audit 2026-03-09T17:31:25.339311+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: cluster 2026-03-09T17:31:24.718895+0000 mgr.y (mgr.14505) 229 : cluster [DBG] pgmap v288: 321 pgs: 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 302 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: cluster 2026-03-09T17:31:24.718895+0000 mgr.y (mgr.14505) 229 : cluster [DBG] pgmap v288: 321 pgs: 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 302 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:24.929340+0000 mon.c (mon.2) 419 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:24.929340+0000 mon.c (mon.2) 419 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: cluster 2026-03-09T17:31:25.279217+0000 mon.a (mon.0) 1796 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: cluster 2026-03-09T17:31:25.279217+0000 mon.a (mon.0) 1796 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.281578+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.281578+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.281644+0000 mon.a (mon.0) 1798 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]': finished 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.281644+0000 mon.a (mon.0) 1798 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "28"}]': finished 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: cluster 2026-03-09T17:31:25.306826+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: cluster 2026-03-09T17:31:25.306826+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.308334+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.308334+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.338008+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.338008+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.339311+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:25 vm02 bash[23351]: audit 2026-03-09T17:31:25.339311+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:31:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:31:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:25.930035+0000 mon.c (mon.2) 420 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:25.930035+0000 mon.c (mon.2) 420 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.286364+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.286364+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.286538+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.286538+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.290197+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.290197+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.307060+0000 mon.a (mon.0) 1804 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.307060+0000 mon.a (mon.0) 1804 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.312457+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.312457+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.609925+0000 mon.a (mon.0) 1806 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.609925+0000 mon.a (mon.0) 1806 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.610585+0000 mon.a (mon.0) 1807 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.610585+0000 mon.a (mon.0) 1807 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.614037+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.614037+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.614070+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: audit 2026-03-09T17:31:26.614070+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.627686+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:26 vm00 bash[20770]: cluster 2026-03-09T17:31:26.627686+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:25.930035+0000 mon.c (mon.2) 420 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:25.930035+0000 mon.c (mon.2) 420 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.286364+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.286364+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.286538+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.286538+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.290197+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.290197+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.307060+0000 mon.a (mon.0) 1804 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.307060+0000 mon.a (mon.0) 1804 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.312457+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.312457+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.609925+0000 mon.a (mon.0) 1806 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.609925+0000 mon.a (mon.0) 1806 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.610585+0000 mon.a (mon.0) 1807 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.610585+0000 mon.a (mon.0) 1807 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.614037+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.614037+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.614070+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: audit 2026-03-09T17:31:26.614070+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.627686+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T17:31:27.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:26 vm00 bash[28333]: cluster 2026-03-09T17:31:26.627686+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:25.930035+0000 mon.c (mon.2) 420 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:25.930035+0000 mon.c (mon.2) 420 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.286364+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.286364+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59908-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.286538+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.286538+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.290197+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.290197+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.307060+0000 mon.a (mon.0) 1804 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.307060+0000 mon.a (mon.0) 1804 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.312457+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.312457+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]: dispatch 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.609925+0000 mon.a (mon.0) 1806 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.609925+0000 mon.a (mon.0) 1806 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.610585+0000 mon.a (mon.0) 1807 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.610585+0000 mon.a (mon.0) 1807 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.614037+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.614037+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.614070+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: audit 2026-03-09T17:31:26.614070+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-27"}]': finished 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.627686+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T17:31:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:26 vm02 bash[23351]: cluster 2026-03-09T17:31:26.627686+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T17:31:28.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: cluster 2026-03-09T17:31:26.719299+0000 mgr.y (mgr.14505) 230 : cluster [DBG] pgmap v292: 336 pgs: 16 unknown, 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 301 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: cluster 2026-03-09T17:31:26.719299+0000 mgr.y (mgr.14505) 230 : cluster [DBG] pgmap v292: 336 pgs: 16 unknown, 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 301 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: audit 2026-03-09T17:31:26.930708+0000 mon.c (mon.2) 421 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: audit 2026-03-09T17:31:26.930708+0000 mon.c (mon.2) 421 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: audit 2026-03-09T17:31:27.529599+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: audit 2026-03-09T17:31:27.529599+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: cluster 2026-03-09T17:31:27.660458+0000 mon.a (mon.0) 1811 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: cluster 2026-03-09T17:31:27.660458+0000 mon.a (mon.0) 1811 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: audit 2026-03-09T17:31:27.674132+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:27 vm00 bash[28333]: audit 2026-03-09T17:31:27.674132+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: cluster 2026-03-09T17:31:26.719299+0000 mgr.y (mgr.14505) 230 : cluster [DBG] pgmap v292: 336 pgs: 16 unknown, 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 301 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: cluster 2026-03-09T17:31:26.719299+0000 mgr.y (mgr.14505) 230 : cluster [DBG] pgmap v292: 336 pgs: 16 unknown, 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 301 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: audit 2026-03-09T17:31:26.930708+0000 mon.c (mon.2) 421 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: audit 2026-03-09T17:31:26.930708+0000 mon.c (mon.2) 421 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: audit 2026-03-09T17:31:27.529599+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: audit 2026-03-09T17:31:27.529599+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: cluster 2026-03-09T17:31:27.660458+0000 mon.a (mon.0) 1811 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: cluster 2026-03-09T17:31:27.660458+0000 mon.a (mon.0) 1811 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: audit 2026-03-09T17:31:27.674132+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:27 vm00 bash[20770]: audit 2026-03-09T17:31:27.674132+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: cluster 2026-03-09T17:31:26.719299+0000 mgr.y (mgr.14505) 230 : cluster [DBG] pgmap v292: 336 pgs: 16 unknown, 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 301 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: cluster 2026-03-09T17:31:26.719299+0000 mgr.y (mgr.14505) 230 : cluster [DBG] pgmap v292: 336 pgs: 16 unknown, 11 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 301 active+clean; 458 KiB data, 652 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: audit 2026-03-09T17:31:26.930708+0000 mon.c (mon.2) 421 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: audit 2026-03-09T17:31:26.930708+0000 mon.c (mon.2) 421 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: audit 2026-03-09T17:31:27.529599+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: audit 2026-03-09T17:31:27.529599+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: cluster 2026-03-09T17:31:27.660458+0000 mon.a (mon.0) 1811 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: cluster 2026-03-09T17:31:27.660458+0000 mon.a (mon.0) 1811 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: audit 2026-03-09T17:31:27.674132+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:27 vm02 bash[23351]: audit 2026-03-09T17:31:27.674132+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:27.935406+0000 mon.c (mon.2) 423 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:29.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:27.935406+0000 mon.c (mon.2) 423 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.659545+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.659545+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: cluster 2026-03-09T17:31:28.662717+0000 mon.a (mon.0) 1814 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: cluster 2026-03-09T17:31:28.662717+0000 mon.a (mon.0) 1814 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.663566+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.663566+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.664214+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.664214+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.683141+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.683141+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.684449+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:28 vm00 bash[28333]: audit 2026-03-09T17:31:28.684449+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:27.935406+0000 mon.c (mon.2) 423 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:27.935406+0000 mon.c (mon.2) 423 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.659545+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.659545+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: cluster 2026-03-09T17:31:28.662717+0000 mon.a (mon.0) 1814 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: cluster 2026-03-09T17:31:28.662717+0000 mon.a (mon.0) 1814 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.663566+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.663566+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.664214+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.664214+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.683141+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.683141+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.684449+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:28 vm00 bash[20770]: audit 2026-03-09T17:31:28.684449+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:27.935406+0000 mon.c (mon.2) 423 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:27.935406+0000 mon.c (mon.2) 423 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.659545+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.659545+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 
192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: cluster 2026-03-09T17:31:28.662717+0000 mon.a (mon.0) 1814 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: cluster 2026-03-09T17:31:28.662717+0000 mon.a (mon.0) 1814 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.663566+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.663566+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.664214+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.664214+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.683141+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.683141+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.684449+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:28 vm02 bash[23351]: audit 2026-03-09T17:31:28.684449+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: cluster 2026-03-09T17:31:28.719666+0000 mgr.y (mgr.14505) 231 : cluster [DBG] pgmap v295: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: cluster 2026-03-09T17:31:28.719666+0000 mgr.y (mgr.14505) 231 : cluster [DBG] pgmap v295: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:28.936381+0000 mon.c (mon.2) 424 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:28.936381+0000 mon.c (mon.2) 424 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.689468+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.689468+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.689566+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.689566+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.689585+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.689585+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: cluster 2026-03-09T17:31:29.695384+0000 mon.a (mon.0) 1821 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: cluster 2026-03-09T17:31:29.695384+0000 mon.a (mon.0) 1821 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.707746+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.707746+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.726865+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.726865+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.728222+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.728222+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.728582+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.728582+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.937430+0000 mon.c (mon.2) 425 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:29 vm00 bash[28333]: audit 2026-03-09T17:31:29.937430+0000 mon.c (mon.2) 425 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: cluster 2026-03-09T17:31:28.719666+0000 mgr.y (mgr.14505) 231 : cluster [DBG] pgmap v295: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: cluster 2026-03-09T17:31:28.719666+0000 mgr.y (mgr.14505) 231 : cluster [DBG] pgmap v295: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:28.936381+0000 mon.c (mon.2) 424 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:28.936381+0000 mon.c (mon.2) 424 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.689468+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.689468+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.689566+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.689566+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.689585+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.689585+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: cluster 2026-03-09T17:31:29.695384+0000 mon.a (mon.0) 1821 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: cluster 2026-03-09T17:31:29.695384+0000 mon.a (mon.0) 1821 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.707746+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.707746+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.726865+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.726865+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.728222+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.728222+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.728582+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.728582+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.937430+0000 mon.c (mon.2) 425 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:29 vm00 bash[20770]: audit 2026-03-09T17:31:29.937430+0000 mon.c (mon.2) 425 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: cluster 2026-03-09T17:31:28.719666+0000 mgr.y (mgr.14505) 231 : cluster [DBG] pgmap v295: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: cluster 2026-03-09T17:31:28.719666+0000 mgr.y (mgr.14505) 231 : cluster [DBG] pgmap v295: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:28.936381+0000 mon.c (mon.2) 424 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:28.936381+0000 mon.c (mon.2) 424 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.689468+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.689468+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 192.168.123.100:0/2524946618' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59908-37"}]': finished 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.689566+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.689566+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 
192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.689585+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.689585+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: cluster 2026-03-09T17:31:29.695384+0000 mon.a (mon.0) 1821 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: cluster 2026-03-09T17:31:29.695384+0000 mon.a (mon.0) 1821 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.707746+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.707746+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.726865+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.726865+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.728222+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.728222+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.728582+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.728582+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.937430+0000 mon.c (mon.2) 425 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:29 vm02 bash[23351]: audit 2026-03-09T17:31:29.937430+0000 mon.c (mon.2) 425 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.693810+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:32.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.693810+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.694005+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.694005+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: cluster 2026-03-09T17:31:30.705847+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: cluster 2026-03-09T17:31:30.705847+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.706913+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.706913+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.716973+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.716973+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.718153+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.718153+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.718453+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.718453+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: cluster 2026-03-09T17:31:30.719996+0000 mgr.y (mgr.14505) 232 : cluster [DBG] pgmap v298: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: cluster 2026-03-09T17:31:30.719996+0000 mgr.y (mgr.14505) 232 : cluster [DBG] pgmap v298: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.938343+0000 mon.c (mon.2) 426 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: audit 2026-03-09T17:31:30.938343+0000 mon.c (mon.2) 426 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: cluster 2026-03-09T17:31:31.612586+0000 mon.a (mon.0) 1833 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:31 vm00 bash[28333]: cluster 2026-03-09T17:31:31.612586+0000 mon.a (mon.0) 1833 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.693810+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.693810+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.694005+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.694005+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: cluster 2026-03-09T17:31:30.705847+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: cluster 2026-03-09T17:31:30.705847+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.706913+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.706913+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.716973+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.716973+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.718153+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.718153+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.718453+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.718453+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: cluster 2026-03-09T17:31:30.719996+0000 mgr.y (mgr.14505) 232 : cluster [DBG] pgmap v298: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: cluster 2026-03-09T17:31:30.719996+0000 mgr.y (mgr.14505) 232 : cluster [DBG] pgmap v298: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.938343+0000 mon.c (mon.2) 426 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: audit 2026-03-09T17:31:30.938343+0000 mon.c (mon.2) 426 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: cluster 2026-03-09T17:31:31.612586+0000 mon.a (mon.0) 1833 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:31 vm00 bash[20770]: cluster 2026-03-09T17:31:31.612586+0000 mon.a (mon.0) 1833 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.693810+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.693810+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 192.168.123.100:0/3231477776' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-46"}]': finished 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.694005+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.694005+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59908-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: cluster 2026-03-09T17:31:30.705847+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: cluster 2026-03-09T17:31:30.705847+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.706913+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.706913+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.716973+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.716973+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.718153+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.718153+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.718453+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.718453+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: cluster 2026-03-09T17:31:30.719996+0000 mgr.y (mgr.14505) 232 : cluster [DBG] pgmap v298: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: cluster 2026-03-09T17:31:30.719996+0000 mgr.y (mgr.14505) 232 : cluster [DBG] pgmap v298: 320 pgs: 32 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.938343+0000 mon.c (mon.2) 426 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: audit 2026-03-09T17:31:30.938343+0000 mon.c (mon.2) 426 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: cluster 2026-03-09T17:31:31.612586+0000 mon.a (mon.0) 1833 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:31 vm02 bash[23351]: cluster 2026-03-09T17:31:31.612586+0000 mon.a (mon.0) 1833 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:32.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:31:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.659173+0000 mgr.y (mgr.14505) 233 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.659173+0000 mgr.y (mgr.14505) 233 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.704412+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.704412+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: cluster 2026-03-09T17:31:31.729228+0000 mon.a (mon.0) 1835 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: cluster 2026-03-09T17:31:31.729228+0000 mon.a (mon.0) 1835 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.729894+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.729894+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.743739+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.743739+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.745032+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.745032+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.939409+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:32 vm00 bash[28333]: audit 2026-03-09T17:31:31.939409+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.659173+0000 mgr.y (mgr.14505) 233 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.659173+0000 mgr.y (mgr.14505) 233 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.704412+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.704412+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: cluster 2026-03-09T17:31:31.729228+0000 mon.a (mon.0) 1835 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: cluster 2026-03-09T17:31:31.729228+0000 mon.a (mon.0) 1835 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.729894+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.729894+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.743739+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.743739+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.745032+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.745032+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.939409+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:33.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:32 vm00 bash[20770]: audit 2026-03-09T17:31:31.939409+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.659173+0000 mgr.y (mgr.14505) 233 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.659173+0000 mgr.y (mgr.14505) 233 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.704412+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.704412+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59916-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: cluster 2026-03-09T17:31:31.729228+0000 mon.a (mon.0) 1835 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: cluster 2026-03-09T17:31:31.729228+0000 mon.a (mon.0) 1835 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.729894+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.729894+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.743739+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.743739+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.745032+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.745032+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.939409+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:32 vm02 bash[23351]: audit 2026-03-09T17:31:31.939409+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.710621+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.710621+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.710657+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.710657+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.719779+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.719779+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: cluster 2026-03-09T17:31:32.720297+0000 mgr.y (mgr.14505) 234 : cluster [DBG] pgmap v301: 328 pgs: 40 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: cluster 2026-03-09T17:31:32.720297+0000 mgr.y (mgr.14505) 234 : cluster [DBG] pgmap v301: 328 pgs: 40 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: cluster 2026-03-09T17:31:32.726014+0000 mon.a (mon.0) 1840 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: cluster 2026-03-09T17:31:32.726014+0000 mon.a (mon.0) 1840 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.767228+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.767228+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.940353+0000 mon.c (mon.2) 428 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:32.940353+0000 mon.c (mon.2) 428 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.724843+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.724843+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.725303+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.725303+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.727599+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.727599+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: cluster 2026-03-09T17:31:33.735339+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: cluster 2026-03-09T17:31:33.735339+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.749312+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:33 vm00 bash[28333]: audit 2026-03-09T17:31:33.749312+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.710621+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.710621+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.710657+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.710657+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.719779+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.719779+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: cluster 2026-03-09T17:31:32.720297+0000 mgr.y (mgr.14505) 234 : cluster [DBG] pgmap v301: 328 pgs: 40 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: cluster 2026-03-09T17:31:32.720297+0000 mgr.y (mgr.14505) 234 : cluster [DBG] pgmap v301: 328 pgs: 40 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: cluster 2026-03-09T17:31:32.726014+0000 mon.a (mon.0) 1840 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: cluster 2026-03-09T17:31:32.726014+0000 mon.a (mon.0) 1840 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.767228+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.767228+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.940353+0000 mon.c (mon.2) 428 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:32.940353+0000 mon.c (mon.2) 428 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.724843+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.724843+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.725303+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.725303+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.727599+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.727599+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: cluster 2026-03-09T17:31:33.735339+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T17:31:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: cluster 2026-03-09T17:31:33.735339+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T17:31:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.749312+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:33 vm00 bash[20770]: audit 2026-03-09T17:31:33.749312+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.710621+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.710621+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59908-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.710657+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.710657+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.719779+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.719779+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: cluster 2026-03-09T17:31:32.720297+0000 mgr.y (mgr.14505) 234 : cluster [DBG] pgmap v301: 328 pgs: 40 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: cluster 2026-03-09T17:31:32.720297+0000 mgr.y (mgr.14505) 234 : cluster [DBG] pgmap v301: 328 pgs: 40 unknown, 1 activating, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 267 active+clean; 458 KiB data, 657 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: cluster 2026-03-09T17:31:32.726014+0000 mon.a (mon.0) 1840 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: cluster 2026-03-09T17:31:32.726014+0000 mon.a (mon.0) 1840 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.767228+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.767228+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.940353+0000 mon.c (mon.2) 428 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:32.940353+0000 mon.c (mon.2) 428 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.724843+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.724843+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59916-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.725303+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.725303+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.727599+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.727599+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: cluster 2026-03-09T17:31:33.735339+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: cluster 2026-03-09T17:31:33.735339+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.749312+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:33 vm02 bash[23351]: audit 2026-03-09T17:31:33.749312+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]: dispatch 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: audit 2026-03-09T17:31:33.941327+0000 mon.c (mon.2) 429 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: audit 2026-03-09T17:31:33.941327+0000 mon.c (mon.2) 429 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: cluster 2026-03-09T17:31:34.724841+0000 mon.a (mon.0) 1846 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: cluster 2026-03-09T17:31:34.724841+0000 mon.a (mon.0) 1846 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: audit 2026-03-09T17:31:34.728308+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]': finished 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: audit 2026-03-09T17:31:34.728308+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]': finished 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: cluster 2026-03-09T17:31:34.736296+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T17:31:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:34 vm02 bash[23351]: cluster 2026-03-09T17:31:34.736296+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T17:31:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: audit 2026-03-09T17:31:33.941327+0000 mon.c (mon.2) 429 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:35.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: audit 2026-03-09T17:31:33.941327+0000 mon.c (mon.2) 429 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: cluster 2026-03-09T17:31:34.724841+0000 mon.a (mon.0) 1846 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: cluster 2026-03-09T17:31:34.724841+0000 mon.a (mon.0) 1846 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: audit 2026-03-09T17:31:34.728308+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]': finished 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: audit 2026-03-09T17:31:34.728308+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]': finished 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: cluster 2026-03-09T17:31:34.736296+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:34 vm00 bash[28333]: cluster 2026-03-09T17:31:34.736296+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: audit 2026-03-09T17:31:33.941327+0000 mon.c (mon.2) 429 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: audit 2026-03-09T17:31:33.941327+0000 mon.c (mon.2) 429 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: cluster 2026-03-09T17:31:34.724841+0000 mon.a (mon.0) 1846 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: cluster 2026-03-09T17:31:34.724841+0000 mon.a (mon.0) 1846 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: audit 2026-03-09T17:31:34.728308+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]': finished 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: audit 2026-03-09T17:31:34.728308+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-29", "mode": "writeback"}]': finished 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: cluster 2026-03-09T17:31:34.736296+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T17:31:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:34 vm00 bash[20770]: cluster 2026-03-09T17:31:34.736296+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:35 vm00 bash[28333]: cluster 2026-03-09T17:31:34.720714+0000 mgr.y (mgr.14505) 235 : cluster [DBG] pgmap v303: 336 pgs: 2 creating+peering, 14 unknown, 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 KiB/s wr, 1 op/s 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:35 vm00 bash[28333]: cluster 2026-03-09T17:31:34.720714+0000 mgr.y (mgr.14505) 235 : cluster [DBG] pgmap v303: 336 pgs: 2 creating+peering, 14 unknown, 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 KiB/s wr, 1 op/s 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:35 vm00 bash[28333]: audit 2026-03-09T17:31:34.942146+0000 mon.c (mon.2) 430 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:35 vm00 bash[28333]: audit 2026-03-09T17:31:34.942146+0000 mon.c (mon.2) 430 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:35 vm00 bash[20770]: cluster 2026-03-09T17:31:34.720714+0000 mgr.y (mgr.14505) 235 : cluster [DBG] pgmap v303: 336 pgs: 2 creating+peering, 14 unknown, 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 KiB/s wr, 1 op/s 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:35 vm00 bash[20770]: cluster 2026-03-09T17:31:34.720714+0000 mgr.y (mgr.14505) 235 : cluster [DBG] pgmap v303: 336 pgs: 2 creating+peering, 14 unknown, 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 KiB/s wr, 1 op/s 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:35 vm00 bash[20770]: audit 2026-03-09T17:31:34.942146+0000 mon.c (mon.2) 430 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:35 vm00 bash[20770]: audit 2026-03-09T17:31:34.942146+0000 mon.c (mon.2) 430 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:35 vm02 bash[23351]: cluster 2026-03-09T17:31:34.720714+0000 mgr.y (mgr.14505) 235 : cluster [DBG] pgmap v303: 336 pgs: 2 creating+peering, 14 unknown, 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 KiB/s wr, 1 op/s 2026-03-09T17:31:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:35 vm02 bash[23351]: cluster 2026-03-09T17:31:34.720714+0000 mgr.y (mgr.14505) 235 : cluster [DBG] pgmap v303: 336 pgs: 2 creating+peering, 14 unknown, 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 KiB/s wr, 1 op/s 2026-03-09T17:31:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:35 vm02 bash[23351]: audit 2026-03-09T17:31:34.942146+0000 mon.c (mon.2) 430 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:35 vm02 bash[23351]: audit 2026-03-09T17:31:34.942146+0000 mon.c (mon.2) 430 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:31:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:31:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: cluster 2026-03-09T17:31:35.906922+0000 mon.a (mon.0) 1849 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: cluster 2026-03-09T17:31:35.906922+0000 mon.a (mon.0) 1849 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:35.912632+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:35.912632+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:35.913957+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:35.913957+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:35.942938+0000 mon.c (mon.2) 431 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:35.942938+0000 mon.c (mon.2) 431 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: cluster 2026-03-09T17:31:36.613354+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: cluster 2026-03-09T17:31:36.613354+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.721600+0000 mon.c (mon.2) 432 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.721600+0000 mon.c (mon.2) 432 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.722414+0000 mon.a (mon.0) 1853 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.722414+0000 mon.a (mon.0) 1853 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.904936+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.904936+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.904984+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.904984+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.905002+0000 mon.a (mon.0) 1856 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-09T17:31:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.905002+0000 mon.a (mon.0) 1856 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: cluster 2026-03-09T17:31:36.917940+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: cluster 2026-03-09T17:31:36.917940+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.924406+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.924406+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.924526+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:36 vm00 bash[20770]: audit 2026-03-09T17:31:36.924526+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: cluster 2026-03-09T17:31:35.906922+0000 mon.a (mon.0) 1849 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: cluster 2026-03-09T17:31:35.906922+0000 mon.a (mon.0) 1849 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:35.912632+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:35.912632+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:35.913957+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:35.913957+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:35.942938+0000 mon.c (mon.2) 431 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:35.942938+0000 mon.c (mon.2) 431 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: cluster 2026-03-09T17:31:36.613354+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: cluster 2026-03-09T17:31:36.613354+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.721600+0000 mon.c (mon.2) 432 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.721600+0000 mon.c (mon.2) 432 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.722414+0000 mon.a (mon.0) 1853 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.722414+0000 mon.a (mon.0) 1853 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": 
"LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.904936+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.904936+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.904984+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.904984+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.905002+0000 mon.a (mon.0) 1856 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.905002+0000 mon.a (mon.0) 1856 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: cluster 2026-03-09T17:31:36.917940+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: cluster 2026-03-09T17:31:36.917940+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.924406+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.924406+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.924526+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:36 vm00 bash[28333]: audit 2026-03-09T17:31:36.924526+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: cluster 2026-03-09T17:31:35.906922+0000 mon.a (mon.0) 1849 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: cluster 2026-03-09T17:31:35.906922+0000 mon.a (mon.0) 1849 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:35.912632+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:35.912632+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:35.913957+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:35.913957+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:35.942938+0000 mon.c (mon.2) 431 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:35.942938+0000 mon.c (mon.2) 431 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: cluster 2026-03-09T17:31:36.613354+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: cluster 2026-03-09T17:31:36.613354+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.721600+0000 mon.c (mon.2) 432 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.721600+0000 mon.c (mon.2) 432 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.722414+0000 mon.a (mon.0) 1853 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.722414+0000 mon.a (mon.0) 1853 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.904936+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.904936+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.904984+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.904984+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? 
192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.905002+0000 mon.a (mon.0) 1856 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.905002+0000 mon.a (mon.0) 1856 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: cluster 2026-03-09T17:31:36.917940+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: cluster 2026-03-09T17:31:36.917940+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.924406+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.924406+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.924526+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:36 vm02 bash[23351]: audit 2026-03-09T17:31:36.924526+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]: dispatch 2026-03-09T17:31:38.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: cluster 2026-03-09T17:31:36.721124+0000 mgr.y (mgr.14505) 236 : cluster [DBG] pgmap v306: 320 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 KiB/s wr, 1 op/s 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: cluster 2026-03-09T17:31:36.721124+0000 mgr.y (mgr.14505) 236 : cluster [DBG] pgmap v306: 320 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 KiB/s wr, 1 op/s 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: audit 2026-03-09T17:31:36.943704+0000 mon.c (mon.2) 433 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: audit 2026-03-09T17:31:36.943704+0000 mon.c (mon.2) 433 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: audit 2026-03-09T17:31:37.908767+0000 mon.a (mon.0) 1860 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: audit 2026-03-09T17:31:37.908767+0000 mon.a (mon.0) 1860 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: audit 2026-03-09T17:31:37.909183+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: audit 2026-03-09T17:31:37.909183+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: cluster 2026-03-09T17:31:37.925145+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:37 vm00 bash[20770]: cluster 2026-03-09T17:31:37.925145+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: cluster 2026-03-09T17:31:36.721124+0000 mgr.y (mgr.14505) 236 : cluster [DBG] pgmap v306: 320 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 KiB/s wr, 1 op/s 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: cluster 2026-03-09T17:31:36.721124+0000 mgr.y (mgr.14505) 236 : cluster [DBG] pgmap v306: 320 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 KiB/s wr, 1 op/s 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: audit 2026-03-09T17:31:36.943704+0000 mon.c (mon.2) 433 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: audit 2026-03-09T17:31:36.943704+0000 mon.c (mon.2) 433 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: audit 2026-03-09T17:31:37.908767+0000 mon.a (mon.0) 1860 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: audit 2026-03-09T17:31:37.908767+0000 mon.a (mon.0) 1860 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: audit 2026-03-09T17:31:37.909183+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: audit 2026-03-09T17:31:37.909183+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: cluster 2026-03-09T17:31:37.925145+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T17:31:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:37 vm00 bash[28333]: cluster 2026-03-09T17:31:37.925145+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: cluster 2026-03-09T17:31:36.721124+0000 mgr.y (mgr.14505) 236 : cluster [DBG] pgmap v306: 320 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 KiB/s wr, 1 op/s 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: cluster 2026-03-09T17:31:36.721124+0000 mgr.y (mgr.14505) 236 : cluster [DBG] pgmap v306: 320 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 309 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 KiB/s wr, 1 op/s 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: audit 2026-03-09T17:31:36.943704+0000 mon.c (mon.2) 433 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: audit 2026-03-09T17:31:36.943704+0000 mon.c (mon.2) 433 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: audit 2026-03-09T17:31:37.908767+0000 mon.a (mon.0) 1860 : audit [INF] from='client.? 
192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: audit 2026-03-09T17:31:37.908767+0000 mon.a (mon.0) 1860 : audit [INF] from='client.? 192.168.123.100:0/1001649209' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59908-38"}]': finished 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: audit 2026-03-09T17:31:37.909183+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: audit 2026-03-09T17:31:37.909183+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/3894224197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59916-47"}]': finished 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: cluster 2026-03-09T17:31:37.925145+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T17:31:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:37 vm02 bash[23351]: cluster 2026-03-09T17:31:37.925145+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.948731+0000 mon.c (mon.2) 434 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.948731+0000 mon.c (mon.2) 434 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.959108+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.959108+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.971672+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.971672+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 
192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.972209+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.972209+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.972458+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.972458+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.973020+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.973020+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.973876+0000 mon.a (mon.0) 1865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.973876+0000 mon.a (mon.0) 1865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.977414+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.977414+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 
192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.979224+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.979224+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.979669+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.979669+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.979882+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.979882+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.981392+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.981392+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.981692+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:37.981692+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:38.082545+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:38.082545+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:38.084083+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:38 vm00 bash[20770]: audit 2026-03-09T17:31:38.084083+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.948731+0000 mon.c (mon.2) 434 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.948731+0000 mon.c (mon.2) 434 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.959108+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.959108+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.971672+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.971672+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.972209+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.972209+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.972458+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.972458+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.973020+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.973020+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.973876+0000 mon.a (mon.0) 1865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.973876+0000 mon.a (mon.0) 1865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.977414+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.977414+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.979224+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.979224+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.979669+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.979669+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.979882+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.979882+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.981392+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.981392+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.981692+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:37.981692+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:38.082545+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:38.082545+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:38.084083+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:38 vm00 bash[28333]: audit 2026-03-09T17:31:38.084083+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.948731+0000 mon.c (mon.2) 434 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.948731+0000 mon.c (mon.2) 434 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.959108+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.959108+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.971672+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.971672+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.972209+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.972209+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.972458+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.972458+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.973020+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.973020+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.973876+0000 mon.a (mon.0) 1865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.973876+0000 mon.a (mon.0) 1865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.977414+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.977414+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.979224+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.979224+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.979669+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.979669+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.979882+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.979882+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.981392+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.981392+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.981692+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:37.981692+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:38.082545+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:38.082545+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:38.084083+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:38 vm02 bash[23351]: audit 2026-03-09T17:31:38.084083+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: cluster 2026-03-09T17:31:38.721672+0000 mgr.y (mgr.14505) 237 : cluster [DBG] pgmap v309: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: cluster 2026-03-09T17:31:38.721672+0000 mgr.y (mgr.14505) 237 : cluster [DBG] pgmap v309: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: cluster 2026-03-09T17:31:38.950993+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: cluster 2026-03-09T17:31:38.950993+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.951019+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.951019+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.960124+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.960124+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.960285+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.960285+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.960476+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.960476+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.967778+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.967778+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.967873+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.967873+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.981891+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.981891+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 
192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: cluster 2026-03-09T17:31:38.986136+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: cluster 2026-03-09T17:31:38.986136+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.986696+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.986696+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.986781+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.986781+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.986939+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:38.986939+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:39.951858+0000 mon.c (mon.2) 440 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:40 vm00 bash[20770]: audit 2026-03-09T17:31:39.951858+0000 mon.c (mon.2) 440 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: cluster 2026-03-09T17:31:38.721672+0000 mgr.y (mgr.14505) 237 : cluster [DBG] pgmap v309: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: cluster 2026-03-09T17:31:38.721672+0000 mgr.y (mgr.14505) 237 : cluster [DBG] pgmap v309: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: cluster 2026-03-09T17:31:38.950993+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: cluster 2026-03-09T17:31:38.950993+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.951019+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.951019+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.960124+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.960124+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.960285+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.960285+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.960476+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.960476+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.967778+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.967778+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.967873+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.967873+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.981891+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.981891+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 
192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: cluster 2026-03-09T17:31:38.986136+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: cluster 2026-03-09T17:31:38.986136+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.986696+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.986696+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.986781+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.986781+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.986939+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:38.986939+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:39.951858+0000 mon.c (mon.2) 440 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:40 vm00 bash[28333]: audit 2026-03-09T17:31:39.951858+0000 mon.c (mon.2) 440 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: cluster 2026-03-09T17:31:38.721672+0000 mgr.y (mgr.14505) 237 : cluster [DBG] pgmap v309: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: cluster 2026-03-09T17:31:38.721672+0000 mgr.y (mgr.14505) 237 : cluster [DBG] pgmap v309: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: cluster 2026-03-09T17:31:38.950993+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: cluster 2026-03-09T17:31:38.950993+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.951019+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.951019+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.960124+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.960124+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59908-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.960285+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.960285+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59916-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.960476+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.960476+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.967778+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.967778+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.967873+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.967873+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.981891+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.981891+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 
192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: cluster 2026-03-09T17:31:38.986136+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: cluster 2026-03-09T17:31:38.986136+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.986696+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.986696+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.986781+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.986781+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.986939+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:38.986939+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:39.951858+0000 mon.c (mon.2) 440 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:40 vm02 bash[23351]: audit 2026-03-09T17:31:39.951858+0000 mon.c (mon.2) 440 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: cluster 2026-03-09T17:31:39.962585+0000 mon.a (mon.0) 1878 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: cluster 2026-03-09T17:31:39.962585+0000 mon.a (mon.0) 1878 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: audit 2026-03-09T17:31:39.966344+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: audit 2026-03-09T17:31:39.966344+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: cluster 2026-03-09T17:31:40.002696+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: cluster 2026-03-09T17:31:40.002696+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: audit 2026-03-09T17:31:40.952830+0000 mon.c (mon.2) 441 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:41 vm02 bash[23351]: audit 2026-03-09T17:31:40.952830+0000 mon.c (mon.2) 441 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: cluster 2026-03-09T17:31:39.962585+0000 mon.a (mon.0) 1878 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: cluster 2026-03-09T17:31:39.962585+0000 mon.a (mon.0) 1878 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: audit 2026-03-09T17:31:39.966344+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: audit 2026-03-09T17:31:39.966344+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: cluster 2026-03-09T17:31:40.002696+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: cluster 2026-03-09T17:31:40.002696+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: audit 2026-03-09T17:31:40.952830+0000 mon.c (mon.2) 441 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:41 vm00 bash[20770]: audit 2026-03-09T17:31:40.952830+0000 mon.c (mon.2) 441 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: cluster 2026-03-09T17:31:39.962585+0000 mon.a (mon.0) 1878 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: cluster 2026-03-09T17:31:39.962585+0000 mon.a (mon.0) 1878 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: audit 2026-03-09T17:31:39.966344+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: audit 2026-03-09T17:31:39.966344+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-29"}]': finished 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: cluster 2026-03-09T17:31:40.002696+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: cluster 2026-03-09T17:31:40.002696+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: audit 2026-03-09T17:31:40.952830+0000 mon.c (mon.2) 441 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:41.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:41 vm00 bash[28333]: audit 2026-03-09T17:31:40.952830+0000 mon.c (mon.2) 441 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:31:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:40.722077+0000 mgr.y (mgr.14505) 238 : cluster [DBG] pgmap v312: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:40.722077+0000 mgr.y (mgr.14505) 238 : cluster [DBG] pgmap v312: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:41.098758+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:41.098758+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:41.098800+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:41.098800+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:41.133501+0000 mon.a (mon.0) 1883 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:41.133501+0000 mon.a (mon.0) 1883 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:41.614128+0000 mon.a (mon.0) 1884 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:41.614128+0000 mon.a (mon.0) 1884 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:41.953792+0000 mon.c (mon.2) 442 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:41.953792+0000 mon.c (mon.2) 442 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:42.120640+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: cluster 2026-03-09T17:31:42.120640+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:42.121812+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:42.121812+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:42.127747+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:42 vm02 bash[23351]: audit 2026-03-09T17:31:42.127747+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:40.722077+0000 mgr.y (mgr.14505) 238 : cluster [DBG] pgmap v312: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:40.722077+0000 mgr.y (mgr.14505) 238 : cluster [DBG] pgmap v312: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:41.098758+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:41.098758+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:41.098800+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:41.098800+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:41.133501+0000 mon.a (mon.0) 1883 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:41.133501+0000 mon.a (mon.0) 1883 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:41.614128+0000 mon.a (mon.0) 1884 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:41.614128+0000 mon.a (mon.0) 1884 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:41.953792+0000 mon.c (mon.2) 442 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:41.953792+0000 mon.c (mon.2) 442 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:42.120640+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: cluster 2026-03-09T17:31:42.120640+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:42.121812+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:42.121812+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:42.127747+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:42 vm00 bash[20770]: audit 2026-03-09T17:31:42.127747+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:40.722077+0000 mgr.y (mgr.14505) 238 : cluster [DBG] pgmap v312: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:40.722077+0000 mgr.y (mgr.14505) 238 : cluster [DBG] pgmap v312: 320 pgs: 1 peering, 15 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 299 active+clean; 8.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:41.098758+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:41.098758+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59916-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:41.098800+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:41.098800+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59908-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:41.133501+0000 mon.a (mon.0) 1883 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:41.133501+0000 mon.a (mon.0) 1883 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:41.614128+0000 mon.a (mon.0) 1884 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:41.614128+0000 mon.a (mon.0) 1884 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:41.953792+0000 mon.c (mon.2) 442 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:41.953792+0000 mon.c (mon.2) 442 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:42.120640+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: cluster 2026-03-09T17:31:42.120640+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:42.121812+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:42.121812+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:42.127747+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:42.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:42 vm00 bash[28333]: audit 2026-03-09T17:31:42.127747+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:41.668184+0000 mgr.y (mgr.14505) 239 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:41.668184+0000 mgr.y (mgr.14505) 239 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:42.563595+0000 mon.a (mon.0) 1887 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:42.563595+0000 mon.a (mon.0) 1887 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:42.567727+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:42.567727+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:42.954563+0000 mon.c (mon.2) 444 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:42.954563+0000 mon.c (mon.2) 444 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.106132+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.106132+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.117026+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.117026+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: cluster 2026-03-09T17:31:43.119171+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: cluster 2026-03-09T17:31:43.119171+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.124641+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.124641+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.130127+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.130127+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.133057+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.133057+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.133151+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.133151+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.133191+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:43 vm02 bash[23351]: audit 2026-03-09T17:31:43.133191+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:41.668184+0000 mgr.y (mgr.14505) 239 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:41.668184+0000 mgr.y (mgr.14505) 239 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:42.563595+0000 mon.a (mon.0) 1887 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:42.563595+0000 mon.a (mon.0) 1887 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:42.567727+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:42.567727+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:42.954563+0000 mon.c (mon.2) 444 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:42.954563+0000 mon.c (mon.2) 444 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.106132+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.106132+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.117026+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.117026+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: cluster 2026-03-09T17:31:43.119171+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: cluster 2026-03-09T17:31:43.119171+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.124641+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.124641+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.130127+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.130127+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.133057+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.133057+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.133151+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.133151+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.133191+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:43 vm00 bash[20770]: audit 2026-03-09T17:31:43.133191+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:41.668184+0000 mgr.y (mgr.14505) 239 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:41.668184+0000 mgr.y (mgr.14505) 239 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:42.563595+0000 mon.a (mon.0) 1887 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:42.563595+0000 mon.a (mon.0) 1887 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:42.567727+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:42.567727+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:42.954563+0000 mon.c (mon.2) 444 : audit [DBG] 
from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:42.954563+0000 mon.c (mon.2) 444 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.106132+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.106132+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.117026+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.117026+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: cluster 2026-03-09T17:31:43.119171+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: cluster 2026-03-09T17:31:43.119171+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.124641+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.124641+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.130127+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.130127+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 
192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.133057+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.133057+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.133151+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.133151+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:43.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.133191+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:43.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:43 vm00 bash[28333]: audit 2026-03-09T17:31:43.133191+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: cluster 2026-03-09T17:31:42.722668+0000 mgr.y (mgr.14505) 240 : cluster [DBG] pgmap v315: 336 pgs: 48 unknown, 1 peering, 5 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 277 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: cluster 2026-03-09T17:31:42.722668+0000 mgr.y (mgr.14505) 240 : cluster [DBG] pgmap v315: 336 pgs: 48 unknown, 1 peering, 5 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 277 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:43.955358+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:43.955358+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.109278+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.109278+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.109414+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.109414+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.109512+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.109512+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.114618+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.114618+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.115535+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.115535+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 
192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: cluster 2026-03-09T17:31:44.119252+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: cluster 2026-03-09T17:31:44.119252+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.121454+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.121454+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.121683+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.121683+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.124771+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.124771+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.124823+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:44 vm02 bash[23351]: audit 2026-03-09T17:31:44.124823+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: cluster 2026-03-09T17:31:42.722668+0000 mgr.y (mgr.14505) 240 : cluster [DBG] pgmap v315: 336 pgs: 48 unknown, 1 peering, 5 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 277 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: cluster 2026-03-09T17:31:42.722668+0000 mgr.y (mgr.14505) 240 : cluster [DBG] pgmap v315: 336 pgs: 48 unknown, 1 peering, 5 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 277 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:43.955358+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:43.955358+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.109278+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.109278+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.109414+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.109414+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.109512+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.109512+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.114618+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.114618+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.115535+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.115535+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: cluster 2026-03-09T17:31:44.119252+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: cluster 2026-03-09T17:31:44.119252+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.121454+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.121454+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.121683+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.121683+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.124771+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.124771+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.124823+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:44 vm00 bash[20770]: audit 2026-03-09T17:31:44.124823+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: cluster 2026-03-09T17:31:42.722668+0000 mgr.y (mgr.14505) 240 : cluster [DBG] pgmap v315: 336 pgs: 48 unknown, 1 peering, 5 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 277 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: cluster 2026-03-09T17:31:42.722668+0000 mgr.y (mgr.14505) 240 : cluster [DBG] pgmap v315: 336 pgs: 48 unknown, 1 peering, 5 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 277 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:43.955358+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:43.955358+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.109278+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.109278+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.109414+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.109414+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.109512+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.109512+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.114618+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.114618+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.115535+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.115535+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/16457271' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: cluster 2026-03-09T17:31:44.119252+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: cluster 2026-03-09T17:31:44.119252+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.121454+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.121454+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 
192.168.123.100:0/1578810590' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.121683+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.121683+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.124771+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.124771+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.124823+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:44.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:44 vm00 bash[28333]: audit 2026-03-09T17:31:44.124823+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:44.723227+0000 mon.c (mon.2) 448 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:44.723227+0000 mon.c (mon.2) 448 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:44.723840+0000 mon.a (mon.0) 1900 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:44.723840+0000 mon.a (mon.0) 1900 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:44.956212+0000 mon.c (mon.2) 449 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:44.956212+0000 mon.c (mon.2) 449 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.113807+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.113807+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.113974+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.113974+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.114094+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.114094+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.114458+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.114458+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.117911+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.117911+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: cluster 2026-03-09T17:31:45.130450+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: cluster 2026-03-09T17:31:45.130450+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.143327+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.143327+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.148491+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.148491+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.148845+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.148845+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.149949+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.149949+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.150173+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.150173+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.150466+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.150466+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.150666+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.150666+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.153359+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.153359+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.154572+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.154572+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.154577+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.154577+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.155373+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.155373+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.155800+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.155800+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.156611+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:45 vm00 bash[20770]: audit 2026-03-09T17:31:45.156611+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:44.723227+0000 mon.c (mon.2) 448 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:44.723227+0000 mon.c (mon.2) 448 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:44.723840+0000 mon.a (mon.0) 1900 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:44.723840+0000 mon.a (mon.0) 1900 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:44.956212+0000 mon.c (mon.2) 449 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:44.956212+0000 mon.c (mon.2) 449 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.113807+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.113807+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.113974+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.113974+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.114094+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.114094+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.114458+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.114458+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.117911+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.117911+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: cluster 2026-03-09T17:31:45.130450+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: cluster 2026-03-09T17:31:45.130450+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.143327+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.143327+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.148491+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.148491+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 
192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.148845+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.148845+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.149949+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.149949+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.150173+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.150173+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.150466+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.150466+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.150666+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.150666+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.153359+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.153359+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.154572+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.154572+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.154577+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.154577+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.155373+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.155373+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.155800+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.155800+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.156611+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:45 vm00 bash[28333]: audit 2026-03-09T17:31:45.156611+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:44.723227+0000 mon.c (mon.2) 448 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:44.723227+0000 mon.c (mon.2) 448 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:44.723840+0000 mon.a (mon.0) 1900 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:44.723840+0000 mon.a (mon.0) 1900 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:44.956212+0000 mon.c (mon.2) 449 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:44.956212+0000 mon.c (mon.2) 449 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.113807+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.113807+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59916-48"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.113974+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.113974+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.114094+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.114094+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59908-39"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.114458+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.114458+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.117911+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.117911+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: cluster 2026-03-09T17:31:45.130450+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: cluster 2026-03-09T17:31:45.130450+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.143327+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.143327+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.148491+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.148491+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.148845+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.148845+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.149949+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.149949+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.150173+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.150173+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.150466+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.150466+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 
192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.150666+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.150666+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.153359+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.153359+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.154572+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.154572+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.154577+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.154577+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.155373+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.155373+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 
192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.155800+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.155800+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:45.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.156611+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:45.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:45 vm02 bash[23351]: audit 2026-03-09T17:31:45.156611+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:44.722988+0000 mgr.y (mgr.14505) 241 : cluster [DBG] pgmap v318: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:44.722988+0000 mgr.y (mgr.14505) 241 : cluster [DBG] pgmap v318: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:45.349129+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:45.349129+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:45.957012+0000 mon.c (mon.2) 453 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:45.957012+0000 mon.c (mon.2) 453 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:46.113666+0000 mon.a (mon.0) 1914 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:46.113666+0000 mon.a (mon.0) 1914 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.116796+0000 mon.a (mon.0) 1915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]': finished 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.116796+0000 mon.a (mon.0) 1915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]': finished 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.116837+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.116837+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.116910+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.116910+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.121644+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.121644+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 
192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:46.122567+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: cluster 2026-03-09T17:31:46.122567+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.127063+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.127063+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.141018+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.141018+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.141367+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.141367+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.204565+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.204565+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.206004+0000 mon.a (mon.0) 1921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:46 vm00 bash[20770]: audit 2026-03-09T17:31:46.206004+0000 mon.a (mon.0) 1921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:31:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:31:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:44.722988+0000 mgr.y (mgr.14505) 241 : cluster [DBG] pgmap v318: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:44.722988+0000 mgr.y (mgr.14505) 241 : cluster [DBG] pgmap v318: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:45.349129+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:45.349129+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:45.957012+0000 mon.c (mon.2) 453 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:45.957012+0000 mon.c (mon.2) 453 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:46.113666+0000 mon.a (mon.0) 1914 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:46.113666+0000 mon.a (mon.0) 1914 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.116796+0000 mon.a (mon.0) 1915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]': finished 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.116796+0000 mon.a (mon.0) 1915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]': finished 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.116837+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.116837+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.116910+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.116910+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.121644+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.121644+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 
192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:46.122567+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: cluster 2026-03-09T17:31:46.122567+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.127063+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.127063+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.141018+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.141018+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.141367+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.141367+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.204565+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.204565+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.206004+0000 mon.a (mon.0) 1921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:46 vm00 bash[28333]: audit 2026-03-09T17:31:46.206004+0000 mon.a (mon.0) 1921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:44.722988+0000 mgr.y (mgr.14505) 241 : cluster [DBG] pgmap v318: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:44.722988+0000 mgr.y (mgr.14505) 241 : cluster [DBG] pgmap v318: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:45.349129+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:45.349129+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:45.957012+0000 mon.c (mon.2) 453 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:45.957012+0000 mon.c (mon.2) 453 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:46.113666+0000 mon.a (mon.0) 1914 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:46.113666+0000 mon.a (mon.0) 1914 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.116796+0000 mon.a (mon.0) 1915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]': finished 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.116796+0000 mon.a (mon.0) 1915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-31", "mode": "writeback"}]': finished 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.116837+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.116837+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59916-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.116910+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.116910+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59908-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.121644+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.121644+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 
192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:46.122567+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: cluster 2026-03-09T17:31:46.122567+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.127063+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.127063+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.141018+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.141018+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.141367+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.141367+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:46.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.204565+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.204565+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.206004+0000 mon.a (mon.0) 1921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:46.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:46 vm02 bash[23351]: audit 2026-03-09T17:31:46.206004+0000 mon.a (mon.0) 1921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.396783+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.396783+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: cluster 2026-03-09T17:31:46.723500+0000 mgr.y (mgr.14505) 242 : cluster [DBG] pgmap v321: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: cluster 2026-03-09T17:31:46.723500+0000 mgr.y (mgr.14505) 242 : cluster [DBG] pgmap v321: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.748504+0000 mon.a (mon.0) 1922 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.748504+0000 mon.a (mon.0) 1922 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.755985+0000 mon.a (mon.0) 1923 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.755985+0000 mon.a (mon.0) 1923 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.957779+0000 mon.c (mon.2) 456 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:46.957779+0000 mon.c (mon.2) 456 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.100131+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.100131+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.100787+0000 mon.c (mon.2) 458 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.100787+0000 mon.c (mon.2) 458 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.108493+0000 mon.a (mon.0) 1924 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.108493+0000 mon.a (mon.0) 1924 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.122554+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.122554+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.130511+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.130511+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: cluster 2026-03-09T17:31:47.145604+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: cluster 2026-03-09T17:31:47.145604+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.146213+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:47 vm00 bash[28333]: audit 2026-03-09T17:31:47.146213+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.396783+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.396783+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: cluster 2026-03-09T17:31:46.723500+0000 mgr.y (mgr.14505) 242 : cluster [DBG] pgmap v321: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: cluster 2026-03-09T17:31:46.723500+0000 mgr.y (mgr.14505) 242 : cluster [DBG] pgmap v321: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.748504+0000 mon.a (mon.0) 1922 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.748504+0000 mon.a (mon.0) 1922 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.755985+0000 mon.a (mon.0) 1923 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.755985+0000 mon.a (mon.0) 1923 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 
bash[20770]: audit 2026-03-09T17:31:46.957779+0000 mon.c (mon.2) 456 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:46.957779+0000 mon.c (mon.2) 456 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.100131+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.100131+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.100787+0000 mon.c (mon.2) 458 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.100787+0000 mon.c (mon.2) 458 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.108493+0000 mon.a (mon.0) 1924 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.108493+0000 mon.a (mon.0) 1924 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.122554+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.122554+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.130511+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.130511+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: cluster 2026-03-09T17:31:47.145604+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T17:31:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: cluster 2026-03-09T17:31:47.145604+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T17:31:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.146213+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:47 vm00 bash[20770]: audit 2026-03-09T17:31:47.146213+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.396783+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.396783+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: cluster 2026-03-09T17:31:46.723500+0000 mgr.y (mgr.14505) 242 : cluster [DBG] pgmap v321: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: cluster 2026-03-09T17:31:46.723500+0000 mgr.y (mgr.14505) 242 : cluster [DBG] pgmap v321: 320 pgs: 32 creating+peering, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 279 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s; 63 B/s, 0 objects/s recovering 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.748504+0000 mon.a (mon.0) 1922 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.748504+0000 mon.a (mon.0) 1922 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.755985+0000 mon.a (mon.0) 1923 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.755985+0000 mon.a (mon.0) 1923 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 
bash[23351]: audit 2026-03-09T17:31:46.957779+0000 mon.c (mon.2) 456 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:46.957779+0000 mon.c (mon.2) 456 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.100131+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.100131+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.100787+0000 mon.c (mon.2) 458 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.100787+0000 mon.c (mon.2) 458 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.108493+0000 mon.a (mon.0) 1924 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.108493+0000 mon.a (mon.0) 1924 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.122554+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.122554+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.130511+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.130511+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: cluster 2026-03-09T17:31:47.145604+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: cluster 2026-03-09T17:31:47.145604+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.146213+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:47 vm02 bash[23351]: audit 2026-03-09T17:31:47.146213+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]: dispatch 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:47.958461+0000 mon.c (mon.2) 459 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:47.958461+0000 mon.c (mon.2) 459 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: cluster 2026-03-09T17:31:48.122528+0000 mon.a (mon.0) 1928 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: cluster 2026-03-09T17:31:48.122528+0000 mon.a (mon.0) 1928 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:48.126072+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:48.126072+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:48.126206+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:48.126206+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:48.126234+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: audit 2026-03-09T17:31:48.126234+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: cluster 2026-03-09T17:31:48.147285+0000 mon.a (mon.0) 1932 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:48 vm00 bash[20770]: cluster 2026-03-09T17:31:48.147285+0000 mon.a (mon.0) 1932 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:47.958461+0000 mon.c (mon.2) 459 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:47.958461+0000 mon.c (mon.2) 459 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: cluster 2026-03-09T17:31:48.122528+0000 mon.a (mon.0) 1928 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: cluster 2026-03-09T17:31:48.122528+0000 mon.a (mon.0) 1928 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:48.126072+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:48.126072+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:48.126206+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:48.126206+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:48.126234+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: audit 2026-03-09T17:31:48.126234+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: cluster 2026-03-09T17:31:48.147285+0000 mon.a (mon.0) 1932 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T17:31:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:48 vm00 bash[28333]: cluster 2026-03-09T17:31:48.147285+0000 mon.a (mon.0) 1932 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:47.958461+0000 mon.c (mon.2) 459 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:47.958461+0000 mon.c (mon.2) 459 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: cluster 2026-03-09T17:31:48.122528+0000 mon.a (mon.0) 1928 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: cluster 2026-03-09T17:31:48.122528+0000 mon.a (mon.0) 1928 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:48.126072+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:48.126072+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59908-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:48.126206+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:48.126206+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59916-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:48.126234+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: audit 2026-03-09T17:31:48.126234+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-31"}]': finished 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: cluster 2026-03-09T17:31:48.147285+0000 mon.a (mon.0) 1932 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T17:31:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:48 vm02 bash[23351]: cluster 2026-03-09T17:31:48.147285+0000 mon.a (mon.0) 1932 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:49 vm00 bash[20770]: cluster 2026-03-09T17:31:48.723882+0000 mgr.y (mgr.14505) 243 : cluster [DBG] pgmap v324: 335 pgs: 16 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 313 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:49 vm00 bash[20770]: cluster 2026-03-09T17:31:48.723882+0000 mgr.y (mgr.14505) 243 : cluster [DBG] pgmap v324: 335 pgs: 16 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 313 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:49 vm00 bash[20770]: audit 2026-03-09T17:31:48.959345+0000 mon.c (mon.2) 460 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:49 vm00 bash[20770]: audit 2026-03-09T17:31:48.959345+0000 mon.c (mon.2) 460 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:49 vm00 bash[20770]: cluster 2026-03-09T17:31:49.148747+0000 mon.a (mon.0) 1933 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:49 vm00 bash[20770]: cluster 2026-03-09T17:31:49.148747+0000 mon.a (mon.0) 1933 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:49 vm00 bash[28333]: cluster 2026-03-09T17:31:48.723882+0000 mgr.y (mgr.14505) 243 : cluster [DBG] pgmap v324: 335 pgs: 16 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 313 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:49 vm00 bash[28333]: cluster 2026-03-09T17:31:48.723882+0000 mgr.y (mgr.14505) 243 : cluster [DBG] pgmap v324: 335 pgs: 16 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 313 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:49 vm00 bash[28333]: audit 2026-03-09T17:31:48.959345+0000 mon.c (mon.2) 460 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:49 vm00 bash[28333]: audit 2026-03-09T17:31:48.959345+0000 mon.c (mon.2) 460 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:49 vm00 bash[28333]: cluster 2026-03-09T17:31:49.148747+0000 mon.a (mon.0) 1933 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T17:31:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:49 vm00 bash[28333]: cluster 2026-03-09T17:31:49.148747+0000 mon.a (mon.0) 1933 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T17:31:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:49 vm02 bash[23351]: cluster 2026-03-09T17:31:48.723882+0000 mgr.y (mgr.14505) 243 : cluster [DBG] pgmap v324: 335 pgs: 16 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 313 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:31:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:49 vm02 bash[23351]: cluster 2026-03-09T17:31:48.723882+0000 mgr.y (mgr.14505) 243 : cluster [DBG] pgmap v324: 335 pgs: 16 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 313 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:31:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:49 vm02 bash[23351]: audit 2026-03-09T17:31:48.959345+0000 mon.c (mon.2) 460 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:49 vm02 bash[23351]: audit 2026-03-09T17:31:48.959345+0000 mon.c (mon.2) 460 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:49 vm02 bash[23351]: cluster 2026-03-09T17:31:49.148747+0000 mon.a (mon.0) 1933 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T17:31:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:49 vm02 bash[23351]: cluster 2026-03-09T17:31:49.148747+0000 mon.a (mon.0) 1933 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: cluster 2026-03-09T17:31:49.411804+0000 mon.a (mon.0) 1934 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: cluster 2026-03-09T17:31:49.411804+0000 mon.a (mon.0) 1934 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:49.960428+0000 mon.c (mon.2) 461 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:49.960428+0000 mon.c (mon.2) 461 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: cluster 2026-03-09T17:31:50.197383+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: cluster 2026-03-09T17:31:50.197383+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.203228+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.203228+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.203667+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.203667+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.213882+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.213882+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.214310+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.214310+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.217324+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.217324+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.222370+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:50 vm00 bash[20770]: audit 2026-03-09T17:31:50.222370+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: cluster 2026-03-09T17:31:49.411804+0000 mon.a (mon.0) 1934 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: cluster 2026-03-09T17:31:49.411804+0000 mon.a (mon.0) 1934 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:49.960428+0000 mon.c (mon.2) 461 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:49.960428+0000 mon.c (mon.2) 461 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: cluster 2026-03-09T17:31:50.197383+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: cluster 2026-03-09T17:31:50.197383+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.203228+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.203228+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.203667+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.203667+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.213882+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.213882+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 
192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.214310+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.214310+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.217324+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.217324+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.222370+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:50 vm00 bash[28333]: audit 2026-03-09T17:31:50.222370+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: cluster 2026-03-09T17:31:49.411804+0000 mon.a (mon.0) 1934 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: cluster 2026-03-09T17:31:49.411804+0000 mon.a (mon.0) 1934 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:49.960428+0000 mon.c (mon.2) 461 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:49.960428+0000 mon.c (mon.2) 461 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: cluster 2026-03-09T17:31:50.197383+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: cluster 2026-03-09T17:31:50.197383+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.203228+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.203228+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.203667+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.203667+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.213882+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.213882+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.214310+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.214310+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.217324+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.217324+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.222370+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:50 vm02 bash[23351]: audit 2026-03-09T17:31:50.222370+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: cluster 2026-03-09T17:31:50.724228+0000 mgr.y (mgr.14505) 244 : cluster [DBG] pgmap v327: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: cluster 2026-03-09T17:31:50.724228+0000 mgr.y (mgr.14505) 244 : cluster [DBG] pgmap v327: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:50.961268+0000 mon.c (mon.2) 463 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:50.961268+0000 mon.c (mon.2) 463 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.197201+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.197201+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.197250+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.197250+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.197276+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.197276+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.207215+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.207215+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: cluster 2026-03-09T17:31:51.208606+0000 mon.a (mon.0) 1942 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: cluster 2026-03-09T17:31:51.208606+0000 mon.a (mon.0) 1942 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.209321+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.209321+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: cluster 2026-03-09T17:31:50.724228+0000 mgr.y (mgr.14505) 244 : cluster [DBG] pgmap v327: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: cluster 2026-03-09T17:31:50.724228+0000 mgr.y (mgr.14505) 244 : cluster [DBG] pgmap v327: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:50.961268+0000 mon.c (mon.2) 463 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:50.961268+0000 mon.c (mon.2) 463 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.197201+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.197201+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.197250+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.197250+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.197276+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.197276+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.207215+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.207215+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: cluster 2026-03-09T17:31:51.208606+0000 mon.a (mon.0) 1942 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: cluster 2026-03-09T17:31:51.208606+0000 mon.a (mon.0) 1942 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.209321+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.209321+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.226505+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.226505+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.226929+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.226929+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.227296+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.227296+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.227852+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:51 vm00 bash[28333]: audit 2026-03-09T17:31:51.227852+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.226505+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.226505+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.226929+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.226929+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.227296+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.227296+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.227852+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:51 vm00 bash[20770]: audit 2026-03-09T17:31:51.227852+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: cluster 2026-03-09T17:31:50.724228+0000 mgr.y (mgr.14505) 244 : cluster [DBG] pgmap v327: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: cluster 2026-03-09T17:31:50.724228+0000 mgr.y (mgr.14505) 244 : cluster [DBG] pgmap v327: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:50.961268+0000 mon.c (mon.2) 463 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:50.961268+0000 mon.c (mon.2) 463 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.197201+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.197201+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.197250+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.197250+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.197276+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.197276+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.207215+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.207215+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/1292596765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: cluster 2026-03-09T17:31:51.208606+0000 mon.a (mon.0) 1942 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: cluster 2026-03-09T17:31:51.208606+0000 mon.a (mon.0) 1942 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.209321+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.209321+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.226505+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.226505+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.226929+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.226929+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 
192.168.123.100:0/3948829751' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.227296+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.227296+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.227852+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:51 vm02 bash[23351]: audit 2026-03-09T17:31:51.227852+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:31:51.887 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:31:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:51.678339+0000 mgr.y (mgr.14505) 245 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:51.678339+0000 mgr.y (mgr.14505) 245 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:51.962261+0000 mon.c (mon.2) 465 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:51.962261+0000 mon.c (mon.2) 465 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.201051+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.201051+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.201182+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.201182+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.201273+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.201273+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.204590+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.204590+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: cluster 2026-03-09T17:31:52.210337+0000 mon.a (mon.0) 1949 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: cluster 2026-03-09T17:31:52.210337+0000 mon.a (mon.0) 1949 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.220171+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.220171+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.230197+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.230197+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 
192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.230638+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.230638+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.231285+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.231285+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.231640+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.231640+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.232203+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.232203+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.232848+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.232848+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.235553+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.235553+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.236518+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:51.678339+0000 mgr.y (mgr.14505) 245 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:51.678339+0000 mgr.y (mgr.14505) 245 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:51.962261+0000 mon.c (mon.2) 465 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:51.962261+0000 mon.c (mon.2) 465 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.201051+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.201051+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.201182+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.201182+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.201273+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.201273+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.204590+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.204590+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: cluster 2026-03-09T17:31:52.210337+0000 mon.a (mon.0) 1949 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: cluster 2026-03-09T17:31:52.210337+0000 mon.a (mon.0) 1949 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.220171+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.220171+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.230197+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.230197+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 
192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.230638+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.230638+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.231285+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.231285+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.231640+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.231640+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.232203+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.232203+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.232848+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.232848+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.235553+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.235553+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.236518+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.236518+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.236789+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.236789+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.237144+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.237144+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.237703+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.237703+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.238372+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:52 vm00 bash[28333]: audit 2026-03-09T17:31:52.238372+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.236518+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.236789+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.236789+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.237144+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.237144+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.237703+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.237703+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.238372+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:52 vm00 bash[20770]: audit 2026-03-09T17:31:52.238372+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:51.678339+0000 mgr.y (mgr.14505) 245 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:51.678339+0000 mgr.y (mgr.14505) 245 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:51.962261+0000 mon.c (mon.2) 465 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:51.962261+0000 mon.c (mon.2) 465 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.201051+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.201051+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59908-40"}]': finished 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.201182+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.201182+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59916-49"}]': finished 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.201273+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.201273+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.204590+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.204590+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: cluster 2026-03-09T17:31:52.210337+0000 mon.a (mon.0) 1949 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: cluster 2026-03-09T17:31:52.210337+0000 mon.a (mon.0) 1949 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.220171+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.220171+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.230197+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.230197+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.230638+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.230638+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.231285+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 
192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.231285+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.231640+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.231640+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.232203+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.232203+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.232848+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.232848+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.235553+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.235553+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.236518+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 
192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.236518+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.236789+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.236789+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.237144+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.237144+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.237703+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.237703+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.238372+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:52 vm02 bash[23351]: audit 2026-03-09T17:31:52.238372+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: cluster 2026-03-09T17:31:52.724726+0000 mgr.y (mgr.14505) 246 : cluster [DBG] pgmap v330: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: cluster 2026-03-09T17:31:52.724726+0000 mgr.y (mgr.14505) 246 : cluster [DBG] pgmap v330: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:52.963081+0000 mon.c (mon.2) 469 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:52.963081+0000 mon.c (mon.2) 469 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.205589+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.205589+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.205694+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.205694+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.205727+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.205727+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: cluster 2026-03-09T17:31:53.208898+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: cluster 2026-03-09T17:31:53.208898+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.212812+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.212812+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.214353+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.214353+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.214712+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.214712+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.214916+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.214916+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.216055+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.216055+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.216122+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:53 vm00 bash[20770]: audit 2026-03-09T17:31:53.216122+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: cluster 2026-03-09T17:31:52.724726+0000 mgr.y (mgr.14505) 246 : cluster [DBG] pgmap v330: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: cluster 2026-03-09T17:31:52.724726+0000 mgr.y (mgr.14505) 246 : cluster [DBG] pgmap v330: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:52.963081+0000 mon.c (mon.2) 469 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:52.963081+0000 mon.c (mon.2) 469 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.205589+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.205589+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.205694+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.205694+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.205727+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.205727+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: cluster 2026-03-09T17:31:53.208898+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: cluster 2026-03-09T17:31:53.208898+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.212812+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.212812+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.214353+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.214353+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.214712+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.214712+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.214916+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.214916+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.216055+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.216055+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.216122+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:53 vm00 bash[28333]: audit 2026-03-09T17:31:53.216122+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: cluster 2026-03-09T17:31:52.724726+0000 mgr.y (mgr.14505) 246 : cluster [DBG] pgmap v330: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: cluster 2026-03-09T17:31:52.724726+0000 mgr.y (mgr.14505) 246 : cluster [DBG] pgmap v330: 319 pgs: 32 unknown, 1 peering, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 281 active+clean; 4.4 MiB data, 675 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:52.963081+0000 mon.c (mon.2) 469 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:52.963081+0000 mon.c (mon.2) 469 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.205589+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.205589+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.205694+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.205694+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59916-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.205727+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.205727+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59908-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: cluster 2026-03-09T17:31:53.208898+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: cluster 2026-03-09T17:31:53.208898+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.212812+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.212812+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.214353+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.214353+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.214712+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.214712+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.214916+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.214916+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.216055+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.216055+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]: dispatch 2026-03-09T17:31:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.216122+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:53 vm02 bash[23351]: audit 2026-03-09T17:31:53.216122+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:53.964013+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:53.964013+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: cluster 2026-03-09T17:31:54.206016+0000 mon.a (mon.0) 1964 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: cluster 2026-03-09T17:31:54.206016+0000 mon.a (mon.0) 1964 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:54.210051+0000 mon.a (mon.0) 1965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]': finished 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:54.210051+0000 mon.a (mon.0) 1965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]': finished 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: cluster 2026-03-09T17:31:54.243141+0000 mon.a (mon.0) 1966 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: cluster 2026-03-09T17:31:54.243141+0000 mon.a (mon.0) 1966 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:54.314014+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:54.314014+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:54.315507+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:54 vm02 bash[23351]: audit 2026-03-09T17:31:54.315507+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:53.964013+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:53.964013+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: cluster 2026-03-09T17:31:54.206016+0000 mon.a (mon.0) 1964 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:55.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: cluster 2026-03-09T17:31:54.206016+0000 mon.a (mon.0) 1964 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:54.210051+0000 mon.a (mon.0) 1965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]': finished 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:54.210051+0000 mon.a (mon.0) 1965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]': finished 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: cluster 2026-03-09T17:31:54.243141+0000 mon.a (mon.0) 1966 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: cluster 2026-03-09T17:31:54.243141+0000 mon.a (mon.0) 1966 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:54.314014+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:54.314014+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:54.315507+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:54 vm00 bash[28333]: audit 2026-03-09T17:31:54.315507+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:53.964013+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:53.964013+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: cluster 2026-03-09T17:31:54.206016+0000 mon.a (mon.0) 1964 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: cluster 2026-03-09T17:31:54.206016+0000 mon.a (mon.0) 1964 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:54.210051+0000 mon.a (mon.0) 1965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]': finished 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:54.210051+0000 mon.a (mon.0) 1965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-33", "mode": "writeback"}]': finished 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: cluster 2026-03-09T17:31:54.243141+0000 mon.a (mon.0) 1966 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: cluster 2026-03-09T17:31:54.243141+0000 mon.a (mon.0) 1966 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:54.314014+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:54.314014+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:54.315507+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:54 vm00 bash[20770]: audit 2026-03-09T17:31:54.315507+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: cluster 2026-03-09T17:31:54.725135+0000 mgr.y (mgr.14505) 247 : cluster [DBG] pgmap v333: 319 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: cluster 2026-03-09T17:31:54.725135+0000 mgr.y (mgr.14505) 247 : cluster [DBG] pgmap v333: 319 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:54.725630+0000 mon.c (mon.2) 472 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:54.725630+0000 mon.c (mon.2) 472 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:54.726414+0000 mon.a (mon.0) 1968 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:54.726414+0000 mon.a (mon.0) 1968 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:54.964993+0000 mon.c (mon.2) 473 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:54.964993+0000 mon.c (mon.2) 473 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.411629+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.411629+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.411847+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.411847+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.411986+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.411986+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.412111+0000 mon.a (mon.0) 1972 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.412111+0000 mon.a (mon.0) 1972 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: cluster 2026-03-09T17:31:55.449203+0000 mon.a (mon.0) 1973 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: cluster 2026-03-09T17:31:55.449203+0000 mon.a (mon.0) 1973 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.512288+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.512288+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.513604+0000 mon.a (mon.0) 1974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:55 vm02 bash[23351]: audit 2026-03-09T17:31:55.513604+0000 mon.a (mon.0) 1974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: cluster 2026-03-09T17:31:54.725135+0000 mgr.y (mgr.14505) 247 : cluster [DBG] pgmap v333: 319 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: cluster 2026-03-09T17:31:54.725135+0000 mgr.y (mgr.14505) 247 : cluster [DBG] pgmap v333: 319 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:54.725630+0000 mon.c (mon.2) 472 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:54.725630+0000 mon.c (mon.2) 472 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:54.726414+0000 mon.a (mon.0) 1968 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:54.726414+0000 mon.a (mon.0) 1968 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:54.964993+0000 mon.c (mon.2) 473 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:54.964993+0000 mon.c (mon.2) 473 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.411629+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.411629+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.411847+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.411847+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.411986+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.411986+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.412111+0000 mon.a (mon.0) 1972 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.412111+0000 mon.a (mon.0) 1972 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: cluster 2026-03-09T17:31:55.449203+0000 mon.a (mon.0) 1973 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: cluster 2026-03-09T17:31:55.449203+0000 mon.a (mon.0) 1973 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.512288+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.512288+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.513604+0000 mon.a (mon.0) 1974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:55 vm00 bash[28333]: audit 2026-03-09T17:31:55.513604+0000 mon.a (mon.0) 1974 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: cluster 2026-03-09T17:31:54.725135+0000 mgr.y (mgr.14505) 247 : cluster [DBG] pgmap v333: 319 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: cluster 2026-03-09T17:31:54.725135+0000 mgr.y (mgr.14505) 247 : cluster [DBG] pgmap v333: 319 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:54.725630+0000 mon.c (mon.2) 472 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:54.725630+0000 mon.c (mon.2) 472 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:54.726414+0000 mon.a (mon.0) 1968 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:54.726414+0000 mon.a (mon.0) 1968 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:54.964993+0000 mon.c (mon.2) 473 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:54.964993+0000 mon.c (mon.2) 473 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.411629+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.411629+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59916-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.411847+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.411847+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59908-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.411986+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.411986+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.412111+0000 mon.a (mon.0) 1972 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.412111+0000 mon.a (mon.0) 1972 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: cluster 2026-03-09T17:31:55.449203+0000 mon.a (mon.0) 1973 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: cluster 2026-03-09T17:31:55.449203+0000 mon.a (mon.0) 1973 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.512288+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.512288+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.513604+0000 mon.a (mon.0) 1974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:55 vm00 bash[20770]: audit 2026-03-09T17:31:55.513604+0000 mon.a (mon.0) 1974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]: dispatch 2026-03-09T17:31:56.583 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:31:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:31:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: audit 2026-03-09T17:31:55.965928+0000 mon.c (mon.2) 474 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: audit 2026-03-09T17:31:55.965928+0000 mon.c (mon.2) 474 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: cluster 2026-03-09T17:31:56.411557+0000 mon.a (mon.0) 1975 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: cluster 2026-03-09T17:31:56.411557+0000 mon.a (mon.0) 1975 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: audit 2026-03-09T17:31:56.414986+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: audit 2026-03-09T17:31:56.414986+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: cluster 2026-03-09T17:31:56.428566+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T17:31:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:56 vm02 bash[23351]: cluster 2026-03-09T17:31:56.428566+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T17:31:57.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: audit 2026-03-09T17:31:55.965928+0000 mon.c (mon.2) 474 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: audit 2026-03-09T17:31:55.965928+0000 mon.c (mon.2) 474 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: cluster 2026-03-09T17:31:56.411557+0000 mon.a (mon.0) 1975 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: cluster 2026-03-09T17:31:56.411557+0000 mon.a (mon.0) 1975 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: audit 2026-03-09T17:31:56.414986+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: audit 2026-03-09T17:31:56.414986+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: cluster 2026-03-09T17:31:56.428566+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:56 vm00 bash[28333]: cluster 2026-03-09T17:31:56.428566+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: audit 2026-03-09T17:31:55.965928+0000 mon.c (mon.2) 474 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: audit 2026-03-09T17:31:55.965928+0000 mon.c (mon.2) 474 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: cluster 2026-03-09T17:31:56.411557+0000 mon.a (mon.0) 1975 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: cluster 2026-03-09T17:31:56.411557+0000 mon.a (mon.0) 1975 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: audit 2026-03-09T17:31:56.414986+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: audit 2026-03-09T17:31:56.414986+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-33"}]': finished 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: cluster 2026-03-09T17:31:56.428566+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T17:31:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:56 vm00 bash[20770]: cluster 2026-03-09T17:31:56.428566+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: cluster 2026-03-09T17:31:56.616159+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: cluster 2026-03-09T17:31:56.616159+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: cluster 2026-03-09T17:31:56.629565+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: cluster 2026-03-09T17:31:56.629565+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.630561+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.630561+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.632009+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.632009+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.634084+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.634084+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.634881+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.634881+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.697761+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.697761+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.697892+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.697892+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.697973+0000 mon.c (mon.2) 478 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.697973+0000 mon.c (mon.2) 478 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698050+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698050+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698118+0000 mon.c (mon.2) 480 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698118+0000 mon.c (mon.2) 480 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698515+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698515+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698624+0000 mon.a (mon.0) 1983 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698624+0000 mon.a (mon.0) 1983 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698698+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.698698+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.699553+0000 mon.a (mon.0) 1985 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.699553+0000 mon.a (mon.0) 1985 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.699766+0000 mon.a (mon.0) 1986 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.699766+0000 mon.a (mon.0) 1986 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: cluster 2026-03-09T17:31:56.725569+0000 mgr.y (mgr.14505) 248 : cluster [DBG] pgmap v337: 287 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 282 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: cluster 2026-03-09T17:31:56.725569+0000 mgr.y (mgr.14505) 248 : cluster [DBG] pgmap v337: 287 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 282 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.726191+0000 mon.c (mon.2) 481 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.726191+0000 mon.c (mon.2) 481 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.727391+0000 mon.a (mon.0) 1987 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.727391+0000 mon.a (mon.0) 1987 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.966888+0000 mon.c (mon.2) 482 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:56.966888+0000 mon.c (mon.2) 482 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:57.581280+0000 mon.a (mon.0) 1988 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:57.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:57 vm02 bash[23351]: audit 2026-03-09T17:31:57.581280+0000 mon.a (mon.0) 1988 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: cluster 2026-03-09T17:31:56.616159+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: cluster 2026-03-09T17:31:56.616159+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: cluster 2026-03-09T17:31:56.629565+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: cluster 2026-03-09T17:31:56.629565+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.630561+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.630561+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.632009+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.632009+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.634084+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.634084+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.634881+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.634881+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.697761+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.697761+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.697892+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.697892+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.697973+0000 mon.c (mon.2) 478 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.697973+0000 mon.c (mon.2) 478 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698050+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698050+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698118+0000 mon.c (mon.2) 480 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698118+0000 mon.c (mon.2) 480 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698515+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698515+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698624+0000 mon.a (mon.0) 1983 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698624+0000 mon.a (mon.0) 1983 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698698+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.698698+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.699553+0000 mon.a (mon.0) 1985 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.699553+0000 mon.a (mon.0) 1985 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.699766+0000 mon.a (mon.0) 1986 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.699766+0000 mon.a (mon.0) 1986 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: 
cluster 2026-03-09T17:31:56.725569+0000 mgr.y (mgr.14505) 248 : cluster [DBG] pgmap v337: 287 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 282 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: cluster 2026-03-09T17:31:56.725569+0000 mgr.y (mgr.14505) 248 : cluster [DBG] pgmap v337: 287 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 282 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.726191+0000 mon.c (mon.2) 481 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.726191+0000 mon.c (mon.2) 481 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.727391+0000 mon.a (mon.0) 1987 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.727391+0000 mon.a (mon.0) 1987 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.966888+0000 mon.c (mon.2) 482 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:56.966888+0000 mon.c (mon.2) 482 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:57.581280+0000 mon.a (mon.0) 1988 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:57 vm00 bash[20770]: audit 2026-03-09T17:31:57.581280+0000 mon.a (mon.0) 1988 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: cluster 2026-03-09T17:31:56.616159+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: cluster 2026-03-09T17:31:56.616159+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: cluster 2026-03-09T17:31:56.629565+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: cluster 2026-03-09T17:31:56.629565+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T17:31:58.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.630561+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.630561+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.632009+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.632009+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.634084+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.634084+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.634881+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.634881+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.697761+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.697761+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.697892+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.697892+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.697973+0000 mon.c (mon.2) 478 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.697973+0000 mon.c (mon.2) 478 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698050+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698050+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698118+0000 mon.c (mon.2) 480 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698118+0000 mon.c (mon.2) 480 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698515+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698515+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698624+0000 mon.a (mon.0) 1983 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698624+0000 mon.a (mon.0) 1983 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698698+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.698698+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.699553+0000 mon.a (mon.0) 1985 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.699553+0000 mon.a (mon.0) 1985 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.699766+0000 mon.a (mon.0) 1986 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.699766+0000 mon.a (mon.0) 1986 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: 
cluster 2026-03-09T17:31:56.725569+0000 mgr.y (mgr.14505) 248 : cluster [DBG] pgmap v337: 287 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 282 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: cluster 2026-03-09T17:31:56.725569+0000 mgr.y (mgr.14505) 248 : cluster [DBG] pgmap v337: 287 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 282 active+clean; 4.4 MiB data, 676 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.726191+0000 mon.c (mon.2) 481 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.726191+0000 mon.c (mon.2) 481 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.727391+0000 mon.a (mon.0) 1987 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.727391+0000 mon.a (mon.0) 1987 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.966888+0000 mon.c (mon.2) 482 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:56.966888+0000 mon.c (mon.2) 482 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:57.581280+0000 mon.a (mon.0) 1988 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:58.040 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:57 vm00 bash[28333]: audit 2026-03-09T17:31:57.581280+0000 mon.a (mon.0) 1988 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.588034+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.588034+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653044+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653044+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653129+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653129+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653162+0000 mon.a (mon.0) 1991 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T17:31:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653162+0000 mon.a (mon.0) 1991 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653517+0000 mon.a (mon.0) 1992 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653517+0000 mon.a (mon.0) 1992 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653621+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653621+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653895+0000 mon.a (mon.0) 1994 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653895+0000 mon.a (mon.0) 1994 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653974+0000 mon.a (mon.0) 1995 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.653974+0000 mon.a (mon.0) 1995 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.654078+0000 mon.a (mon.0) 1996 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 
bash[23351]: audit 2026-03-09T17:31:57.654078+0000 mon.a (mon.0) 1996 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.667191+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.667191+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: cluster 2026-03-09T17:31:57.669829+0000 mon.a (mon.0) 1997 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: cluster 2026-03-09T17:31:57.669829+0000 mon.a (mon.0) 1997 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.672097+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.672097+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.672097+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.672097+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.673267+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.673267+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.679265+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 
192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.679265+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.684182+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.684182+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.967654+0000 mon.c (mon.2) 485 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:58.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:58 vm02 bash[23351]: audit 2026-03-09T17:31:57.967654+0000 mon.c (mon.2) 485 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.588034+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.588034+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653044+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653044+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653129+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653129+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653162+0000 mon.a (mon.0) 1991 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653162+0000 mon.a (mon.0) 1991 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653517+0000 mon.a (mon.0) 1992 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653517+0000 mon.a (mon.0) 1992 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653621+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653621+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653895+0000 mon.a (mon.0) 1994 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653895+0000 mon.a (mon.0) 1994 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653974+0000 mon.a (mon.0) 1995 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.653974+0000 mon.a (mon.0) 1995 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.654078+0000 mon.a (mon.0) 1996 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 
bash[28333]: audit 2026-03-09T17:31:57.654078+0000 mon.a (mon.0) 1996 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.667191+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.667191+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: cluster 2026-03-09T17:31:57.669829+0000 mon.a (mon.0) 1997 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: cluster 2026-03-09T17:31:57.669829+0000 mon.a (mon.0) 1997 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.672097+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.672097+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.672097+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.672097+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.673267+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.673267+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.679265+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 
192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.679265+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.684182+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.684182+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.967654+0000 mon.c (mon.2) 485 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:58 vm00 bash[28333]: audit 2026-03-09T17:31:57.967654+0000 mon.c (mon.2) 485 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.588034+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.588034+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653044+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653044+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653129+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653129+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653162+0000 mon.a (mon.0) 1991 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653162+0000 mon.a (mon.0) 1991 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653517+0000 mon.a (mon.0) 1992 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653517+0000 mon.a (mon.0) 1992 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.9", "id": [3, 1]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653621+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653621+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.f", "id": [0, 1]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653895+0000 mon.a (mon.0) 1994 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653895+0000 mon.a (mon.0) 1994 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653974+0000 mon.a (mon.0) 1995 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.653974+0000 mon.a (mon.0) 1995 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.6", "id": [6, 7]}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.654078+0000 mon.a (mon.0) 1996 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 
bash[20770]: audit 2026-03-09T17:31:57.654078+0000 mon.a (mon.0) 1996 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.667191+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.667191+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/1557204711' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: cluster 2026-03-09T17:31:57.669829+0000 mon.a (mon.0) 1997 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: cluster 2026-03-09T17:31:57.669829+0000 mon.a (mon.0) 1997 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.672097+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.672097+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.672097+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.672097+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.673267+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.673267+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.679265+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 
192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.679265+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.100:0/3701764966' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.684182+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.684182+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.967654+0000 mon.c (mon.2) 485 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:31:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:58 vm00 bash[20770]: audit 2026-03-09T17:31:57.967654+0000 mon.c (mon.2) 485 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.699772+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.699772+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.699838+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.699838+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.699873+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.699873+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: cluster 2026-03-09T17:31:58.707473+0000 mon.a (mon.0) 2004 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: cluster 2026-03-09T17:31:58.707473+0000 mon.a (mon.0) 2004 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: cluster 2026-03-09T17:31:58.727778+0000 mgr.y (mgr.14505) 249 : cluster [DBG] pgmap v340: 319 pgs: 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 313 B/s wr, 2 op/s; 39 B/s, 0 objects/s recovering 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: cluster 2026-03-09T17:31:58.727778+0000 mgr.y (mgr.14505) 249 : cluster [DBG] pgmap v340: 319 pgs: 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 313 B/s wr, 2 op/s; 39 B/s, 0 objects/s recovering 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.732158+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.732158+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.735886+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.735886+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.740992+0000 mon.c (mon.2) 486 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.740992+0000 mon.c (mon.2) 486 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.745347+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.745347+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.749740+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.749740+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.767449+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.767449+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.775189+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.775189+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 
192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.779817+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.779817+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.779817+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.779817+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.781624+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.781624+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.968484+0000 mon.c (mon.2) 487 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:31:59 vm00 bash[28333]: audit 2026-03-09T17:31:58.968484+0000 mon.c (mon.2) 487 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.699772+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.699772+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.699838+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.699838+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.699873+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.699873+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: cluster 2026-03-09T17:31:58.707473+0000 mon.a (mon.0) 2004 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: cluster 2026-03-09T17:31:58.707473+0000 mon.a (mon.0) 2004 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: cluster 2026-03-09T17:31:58.727778+0000 mgr.y (mgr.14505) 249 : cluster [DBG] pgmap v340: 319 pgs: 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 313 B/s wr, 2 op/s; 39 B/s, 0 objects/s recovering 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: cluster 2026-03-09T17:31:58.727778+0000 mgr.y (mgr.14505) 249 : cluster [DBG] pgmap v340: 319 pgs: 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 313 B/s wr, 2 op/s; 39 B/s, 0 objects/s recovering 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.732158+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.732158+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.735886+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.735886+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.740992+0000 mon.c (mon.2) 486 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.740992+0000 mon.c (mon.2) 486 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.745347+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.745347+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.749740+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.749740+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.767449+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.767449+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.775189+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.775189+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.779817+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.779817+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.779817+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.779817+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.781624+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.781624+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.968484+0000 mon.c (mon.2) 487 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:31:59 vm00 bash[20770]: audit 2026-03-09T17:31:58.968484+0000 mon.c (mon.2) 487 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.699772+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.699772+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59908-41"}]': finished 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.699838+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.699838+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.699873+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.699873+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59916-50"}]': finished 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: cluster 2026-03-09T17:31:58.707473+0000 mon.a (mon.0) 2004 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: cluster 2026-03-09T17:31:58.707473+0000 mon.a (mon.0) 2004 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: cluster 2026-03-09T17:31:58.727778+0000 mgr.y (mgr.14505) 249 : cluster [DBG] pgmap v340: 319 pgs: 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 313 B/s wr, 2 op/s; 39 B/s, 0 objects/s recovering 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: cluster 2026-03-09T17:31:58.727778+0000 mgr.y (mgr.14505) 249 : cluster [DBG] pgmap v340: 319 pgs: 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 313 B/s wr, 2 op/s; 39 B/s, 0 objects/s recovering 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.732158+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.732158+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.735886+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.735886+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.740992+0000 mon.c (mon.2) 486 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.740992+0000 mon.c (mon.2) 486 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.745347+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.745347+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.749740+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.749740+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.767449+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.767449+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.775189+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.775189+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.779817+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.779817+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.779817+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.779817+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.781624+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.781624+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.968484+0000 mon.c (mon.2) 487 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:31:59 vm02 bash[23351]: audit 2026-03-09T17:31:58.968484+0000 mon.c (mon.2) 487 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: cluster 2026-03-09T17:31:59.699824+0000 mon.a (mon.0) 2010 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: cluster 2026-03-09T17:31:59.699824+0000 mon.a (mon.0) 2010 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.710193+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.710193+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.710277+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.710277+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.710370+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.710370+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.844481+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 
192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.844481+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.844626+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.844626+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: cluster 2026-03-09T17:31:59.851598+0000 mon.a (mon.0) 2014 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: cluster 2026-03-09T17:31:59.851598+0000 mon.a (mon.0) 2014 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.857033+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.857033+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.857595+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.857595+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.857689+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? 
192.168.123.100:0/2260325957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.857689+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.969236+0000 mon.c (mon.2) 488 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:00 vm00 bash[28333]: audit 2026-03-09T17:31:59.969236+0000 mon.c (mon.2) 488 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: cluster 2026-03-09T17:31:59.699824+0000 mon.a (mon.0) 2010 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: cluster 2026-03-09T17:31:59.699824+0000 mon.a (mon.0) 2010 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:32:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.710193+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.710193+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.710277+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.710277+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.710370+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.710370+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.844481+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.844481+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.844626+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.844626+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: cluster 2026-03-09T17:31:59.851598+0000 mon.a (mon.0) 2014 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: cluster 2026-03-09T17:31:59.851598+0000 mon.a (mon.0) 2014 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.857033+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.857033+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.857595+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.857595+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.857689+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.857689+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.969236+0000 mon.c (mon.2) 488 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:00 vm00 bash[20770]: audit 2026-03-09T17:31:59.969236+0000 mon.c (mon.2) 488 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: cluster 2026-03-09T17:31:59.699824+0000 mon.a (mon.0) 2010 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: cluster 2026-03-09T17:31:59.699824+0000 mon.a (mon.0) 2010 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.710193+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.710193+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.710277+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.710277+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.710370+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.710370+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59908-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.844481+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.844481+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.844626+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.844626+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: cluster 2026-03-09T17:31:59.851598+0000 mon.a (mon.0) 2014 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: cluster 2026-03-09T17:31:59.851598+0000 mon.a (mon.0) 2014 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.857033+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.857033+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.857595+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.857595+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.857689+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.857689+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.969236+0000 mon.c (mon.2) 488 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:00 vm02 bash[23351]: audit 2026-03-09T17:31:59.969236+0000 mon.c (mon.2) 488 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:00.728286+0000 mgr.y (mgr.14505) 250 : cluster [DBG] pgmap v342: 351 pgs: 32 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:00.728286+0000 mgr.y (mgr.14505) 250 : cluster [DBG] pgmap v342: 351 pgs: 32 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.823083+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.823083+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.823150+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.823150+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.833560+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.833560+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:00.846840+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:00.846840+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.875694+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:01.939 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.875694+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:01.940 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:32:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:00.728286+0000 mgr.y (mgr.14505) 250 : cluster [DBG] pgmap v342: 351 pgs: 32 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:00.728286+0000 mgr.y (mgr.14505) 250 : cluster [DBG] pgmap v342: 351 pgs: 32 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.823083+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.823083+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.823150+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.823150+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 
192.168.123.100:0/2260325957' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.833560+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.833560+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:00.846840+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:00.846840+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.875694+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.875694+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.970224+0000 mon.c (mon.2) 489 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:00.970224+0000 mon.c (mon.2) 489 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:01.823130+0000 mon.a (mon.0) 2022 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:01.823130+0000 mon.a (mon.0) 2022 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:01.848450+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:01.848450+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.856144+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.856144+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.856300+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.856300+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]': finished 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:01.863010+0000 mon.a (mon.0) 2026 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: cluster 2026-03-09T17:32:01.863010+0000 mon.a (mon.0) 2026 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.895663+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.895663+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 
192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.897369+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.897369+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.897676+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.897676+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.898604+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.898604+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.898865+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.898865+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.900420+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:01 vm00 bash[28333]: audit 2026-03-09T17:32:01.900420+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:00.728286+0000 mgr.y (mgr.14505) 250 : cluster [DBG] pgmap v342: 351 pgs: 32 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:00.728286+0000 mgr.y (mgr.14505) 250 : cluster [DBG] pgmap v342: 351 pgs: 32 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 276 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.823083+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.823083+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.823150+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.823150+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.100:0/2260325957' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59916-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.833560+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.833560+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:00.846840+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:00.846840+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.875694+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.875694+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.970224+0000 mon.c (mon.2) 489 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:00.970224+0000 mon.c (mon.2) 489 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:01.823130+0000 mon.a (mon.0) 2022 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:01.823130+0000 mon.a (mon.0) 2022 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:01.848450+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:01.848450+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.856144+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.856144+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.856300+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.856300+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]': finished 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:01.863010+0000 mon.a (mon.0) 2026 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: cluster 2026-03-09T17:32:01.863010+0000 mon.a (mon.0) 2026 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.895663+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.895663+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.897369+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.897369+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.897676+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.897676+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.898604+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 
192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.898604+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.898865+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.898865+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.900420+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:01 vm00 bash[20770]: audit 2026-03-09T17:32:01.900420+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.970224+0000 mon.c (mon.2) 489 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:00.970224+0000 mon.c (mon.2) 489 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:01.823130+0000 mon.a (mon.0) 2022 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:01.823130+0000 mon.a (mon.0) 2022 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:01.848450+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:01.848450+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.856144+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.856144+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59908-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.856300+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]': finished 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.856300+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-35", "mode": "writeback"}]': finished 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:01.863010+0000 mon.a (mon.0) 2026 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: cluster 2026-03-09T17:32:01.863010+0000 mon.a (mon.0) 2026 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.895663+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.895663+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 
192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.897369+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.897369+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.897676+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.897676+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.898604+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.898604+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.898865+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.898865+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:02.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.900420+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:02.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:01 vm02 bash[23351]: audit 2026-03-09T17:32:01.900420+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:03.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:01.689247+0000 mgr.y (mgr.14505) 251 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:01.689247+0000 mgr.y (mgr.14505) 251 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:01.971332+0000 mon.c (mon.2) 490 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:01.971332+0000 mon.c (mon.2) 490 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:02.859741+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:02.859741+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: cluster 2026-03-09T17:32:02.865858+0000 mon.a (mon.0) 2031 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: cluster 2026-03-09T17:32:02.865858+0000 mon.a (mon.0) 2031 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:02.870022+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:02.870022+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 
192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:02.872640+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:02 vm00 bash[28333]: audit 2026-03-09T17:32:02.872640+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:01.689247+0000 mgr.y (mgr.14505) 251 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:01.689247+0000 mgr.y (mgr.14505) 251 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:01.971332+0000 mon.c (mon.2) 490 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:01.971332+0000 mon.c (mon.2) 490 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:02.859741+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:02.859741+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: cluster 2026-03-09T17:32:02.865858+0000 mon.a (mon.0) 2031 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: cluster 2026-03-09T17:32:02.865858+0000 mon.a (mon.0) 2031 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:02.870022+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:02.870022+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:02.872640+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:02 vm00 bash[20770]: audit 2026-03-09T17:32:02.872640+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:01.689247+0000 mgr.y (mgr.14505) 251 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:01.689247+0000 mgr.y (mgr.14505) 251 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:01.971332+0000 mon.c (mon.2) 490 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:01.971332+0000 mon.c (mon.2) 490 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:02.859741+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:02.859741+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: cluster 2026-03-09T17:32:02.865858+0000 mon.a (mon.0) 2031 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: cluster 2026-03-09T17:32:02.865858+0000 mon.a (mon.0) 2031 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:02.870022+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:02.870022+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:02.872640+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:02 vm02 bash[23351]: audit 2026-03-09T17:32:02.872640+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:04.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: cluster 2026-03-09T17:32:02.728787+0000 mgr.y (mgr.14505) 252 : cluster [DBG] pgmap v345: 326 pgs: 8 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 275 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: cluster 2026-03-09T17:32:02.728787+0000 mgr.y (mgr.14505) 252 : cluster [DBG] pgmap v345: 326 pgs: 8 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 275 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:02.972126+0000 mon.c (mon.2) 491 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:02.972126+0000 mon.c (mon.2) 491 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: cluster 2026-03-09T17:32:03.880374+0000 mon.a (mon.0) 2033 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: cluster 2026-03-09T17:32:03.880374+0000 mon.a (mon.0) 2033 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.880890+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.880890+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.882469+0000 mon.a (mon.0) 2034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.882469+0000 mon.a (mon.0) 2034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.925821+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.925821+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.927224+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:03 vm00 bash[28333]: audit 2026-03-09T17:32:03.927224+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: cluster 2026-03-09T17:32:02.728787+0000 mgr.y (mgr.14505) 252 : cluster [DBG] pgmap v345: 326 pgs: 8 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 275 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: cluster 2026-03-09T17:32:02.728787+0000 mgr.y (mgr.14505) 252 : cluster [DBG] pgmap v345: 326 pgs: 8 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 275 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:02.972126+0000 mon.c (mon.2) 491 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:02.972126+0000 mon.c (mon.2) 491 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: cluster 2026-03-09T17:32:03.880374+0000 mon.a (mon.0) 2033 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: cluster 2026-03-09T17:32:03.880374+0000 mon.a (mon.0) 2033 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.880890+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.880890+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 
192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.882469+0000 mon.a (mon.0) 2034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.882469+0000 mon.a (mon.0) 2034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.925821+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.925821+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.927224+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:03 vm00 bash[20770]: audit 2026-03-09T17:32:03.927224+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: cluster 2026-03-09T17:32:02.728787+0000 mgr.y (mgr.14505) 252 : cluster [DBG] pgmap v345: 326 pgs: 8 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 275 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: cluster 2026-03-09T17:32:02.728787+0000 mgr.y (mgr.14505) 252 : cluster [DBG] pgmap v345: 326 pgs: 8 unknown, 32 creating+peering, 2 active+clean+snaptrim, 6 peering, 3 active+clean+snaptrim_wait, 275 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:02.972126+0000 mon.c (mon.2) 491 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:02.972126+0000 mon.c (mon.2) 491 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: cluster 2026-03-09T17:32:03.880374+0000 mon.a (mon.0) 2033 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: cluster 2026-03-09T17:32:03.880374+0000 mon.a (mon.0) 2033 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.880890+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.880890+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.882469+0000 mon.a (mon.0) 2034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.882469+0000 mon.a (mon.0) 2034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.925821+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.925821+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.927224+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:03 vm02 bash[23351]: audit 2026-03-09T17:32:03.927224+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:03.972822+0000 mon.c (mon.2) 492 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:03.972822+0000 mon.c (mon.2) 492 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.729637+0000 mon.c (mon.2) 493 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.729637+0000 mon.c (mon.2) 493 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.730308+0000 mon.a (mon.0) 2036 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.730308+0000 mon.a (mon.0) 2036 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868372+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868372+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868409+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868409+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868428+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868428+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868459+0000 mon.a (mon.0) 2040 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.868459+0000 mon.a (mon.0) 2040 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: cluster 2026-03-09T17:32:04.874351+0000 mon.a (mon.0) 2041 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: cluster 2026-03-09T17:32:04.874351+0000 mon.a (mon.0) 2041 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.887116+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.887116+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.888678+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.888678+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.891998+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.891998+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.895733+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:04 vm00 bash[28333]: audit 2026-03-09T17:32:04.895733+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:03.972822+0000 mon.c (mon.2) 492 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:03.972822+0000 mon.c (mon.2) 492 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.729637+0000 mon.c (mon.2) 493 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.729637+0000 mon.c (mon.2) 493 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.730308+0000 mon.a (mon.0) 2036 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.730308+0000 mon.a (mon.0) 2036 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868372+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868372+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868409+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868409+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868428+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868428+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868459+0000 mon.a (mon.0) 2040 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]': finished 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.868459+0000 mon.a (mon.0) 2040 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]': finished 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: cluster 2026-03-09T17:32:04.874351+0000 mon.a (mon.0) 2041 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: cluster 2026-03-09T17:32:04.874351+0000 mon.a (mon.0) 2041 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.887116+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.887116+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.888678+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.888678+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.891998+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.891998+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.895733+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:04 vm00 bash[20770]: audit 2026-03-09T17:32:04.895733+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:03.972822+0000 mon.c (mon.2) 492 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:03.972822+0000 mon.c (mon.2) 492 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.729637+0000 mon.c (mon.2) 493 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.729637+0000 mon.c (mon.2) 493 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.730308+0000 mon.a (mon.0) 2036 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.730308+0000 mon.a (mon.0) 2036 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868372+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868372+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59916-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868409+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868409+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868428+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868428+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868459+0000 mon.a (mon.0) 2040 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.868459+0000 mon.a (mon.0) 2040 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "25"}]': finished 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: cluster 2026-03-09T17:32:04.874351+0000 mon.a (mon.0) 2041 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: cluster 2026-03-09T17:32:04.874351+0000 mon.a (mon.0) 2041 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.887116+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.887116+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 
192.168.123.100:0/4040056710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.888678+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.888678+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.891998+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.891998+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.895733+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:04 vm02 bash[23351]: audit 2026-03-09T17:32:04.895733+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleWrite (7354 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.WaitForComplete 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.WaitForComplete (6123 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip (7070 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip2 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip2 (7204 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripAppend 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripAppend (7055 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.IsComplete 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.IsComplete (7045 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.IsSafe 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.IsSafe (7011 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.ReturnValue 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.ReturnValue (7101 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.Flush 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.Flush (7106 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.FlushAsync 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.FlushAsync (7050 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripWriteFull 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripWriteFull (6429 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStat 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStat (8218 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStatNS 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStatNS (7202 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.StatRemove 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.StatRemove (7075 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.ExecuteClass 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.ExecuteClass (6527 ms) 2026-03-09T17:32:05.959 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] 
LibRadosAioEC.MultiWrite 2026-03-09T17:32:05.960 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.MultiWrite (7216 ms) 2026-03-09T17:32:05.960 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 16 tests from LibRadosAioEC (112787 ms total) 2026-03-09T17:32:05.960 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: 2026-03-09T17:32:05.960 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] Global test environment tear-down 2026-03-09T17:32:05.960 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [==========] 42 tests from 2 test suites ran. (193826 ms total) 2026-03-09T17:32:05.960 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ PASSED ] 42 tests. 2026-03-09T17:32:06.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: cluster 2026-03-09T17:32:04.729143+0000 mgr.y (mgr.14505) 253 : cluster [DBG] pgmap v348: 318 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 3 B/s, 1 objects/s recovering 2026-03-09T17:32:06.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: cluster 2026-03-09T17:32:04.729143+0000 mgr.y (mgr.14505) 253 : cluster [DBG] pgmap v348: 318 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 3 B/s, 1 objects/s recovering 2026-03-09T17:32:06.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: cluster 2026-03-09T17:32:04.956953+0000 mon.a (mon.0) 2044 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: cluster 2026-03-09T17:32:04.956953+0000 mon.a (mon.0) 2044 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: audit 2026-03-09T17:32:04.996785+0000 mon.c (mon.2) 494 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: audit 2026-03-09T17:32:04.996785+0000 mon.c (mon.2) 494 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: audit 2026-03-09T17:32:05.931789+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: audit 2026-03-09T17:32:05.931789+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: audit 2026-03-09T17:32:05.931892+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: audit 2026-03-09T17:32:05.931892+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: cluster 2026-03-09T17:32:05.937548+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:05 vm00 bash[28333]: cluster 2026-03-09T17:32:05.937548+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: cluster 2026-03-09T17:32:04.729143+0000 mgr.y (mgr.14505) 253 : cluster [DBG] pgmap v348: 318 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 3 B/s, 1 objects/s recovering 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: cluster 2026-03-09T17:32:04.729143+0000 mgr.y (mgr.14505) 253 : cluster [DBG] pgmap v348: 318 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 3 B/s, 1 objects/s recovering 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: cluster 2026-03-09T17:32:04.956953+0000 mon.a (mon.0) 2044 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: cluster 2026-03-09T17:32:04.956953+0000 mon.a (mon.0) 2044 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: audit 2026-03-09T17:32:04.996785+0000 mon.c (mon.2) 494 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: audit 2026-03-09T17:32:04.996785+0000 mon.c (mon.2) 494 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: audit 2026-03-09T17:32:05.931789+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: audit 2026-03-09T17:32:05.931789+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: audit 2026-03-09T17:32:05.931892+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: audit 2026-03-09T17:32:05.931892+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: cluster 2026-03-09T17:32:05.937548+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T17:32:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:05 vm00 bash[20770]: cluster 2026-03-09T17:32:05.937548+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: cluster 2026-03-09T17:32:04.729143+0000 mgr.y (mgr.14505) 253 : cluster [DBG] pgmap v348: 318 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 3 B/s, 1 objects/s recovering 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: cluster 2026-03-09T17:32:04.729143+0000 mgr.y (mgr.14505) 253 : cluster [DBG] pgmap v348: 318 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 3 B/s, 1 objects/s recovering 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: cluster 2026-03-09T17:32:04.956953+0000 mon.a (mon.0) 2044 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: cluster 2026-03-09T17:32:04.956953+0000 mon.a (mon.0) 2044 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: audit 2026-03-09T17:32:04.996785+0000 mon.c (mon.2) 494 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: audit 2026-03-09T17:32:04.996785+0000 mon.c (mon.2) 494 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: audit 2026-03-09T17:32:05.931789+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: audit 2026-03-09T17:32:05.931789+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59908-42"}]': finished 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: audit 2026-03-09T17:32:05.931892+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: audit 2026-03-09T17:32:05.931892+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: cluster 2026-03-09T17:32:05.937548+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T17:32:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:05 vm02 bash[23351]: cluster 2026-03-09T17:32:05.937548+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T17:32:06.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:32:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:32:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:32:07.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: audit 2026-03-09T17:32:05.997648+0000 mon.c (mon.2) 495 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: audit 2026-03-09T17:32:05.997648+0000 mon.c (mon.2) 495 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: cluster 2026-03-09T17:32:06.944777+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: cluster 2026-03-09T17:32:06.944777+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: audit 2026-03-09T17:32:06.945496+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: audit 2026-03-09T17:32:06.945496+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 
192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: audit 2026-03-09T17:32:06.946795+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:07 vm00 bash[28333]: audit 2026-03-09T17:32:06.946795+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: audit 2026-03-09T17:32:05.997648+0000 mon.c (mon.2) 495 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: audit 2026-03-09T17:32:05.997648+0000 mon.c (mon.2) 495 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: cluster 2026-03-09T17:32:06.944777+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: cluster 2026-03-09T17:32:06.944777+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: audit 2026-03-09T17:32:06.945496+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: audit 2026-03-09T17:32:06.945496+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: audit 2026-03-09T17:32:06.946795+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:07 vm00 bash[20770]: audit 2026-03-09T17:32:06.946795+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: audit 2026-03-09T17:32:05.997648+0000 mon.c (mon.2) 495 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: audit 2026-03-09T17:32:05.997648+0000 mon.c (mon.2) 495 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: cluster 2026-03-09T17:32:06.944777+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: cluster 2026-03-09T17:32:06.944777+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: audit 2026-03-09T17:32:06.945496+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: audit 2026-03-09T17:32:06.945496+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: audit 2026-03-09T17:32:06.946795+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:07 vm02 bash[23351]: audit 2026-03-09T17:32:06.946795+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: cluster 2026-03-09T17:32:06.729479+0000 mgr.y (mgr.14505) 254 : cluster [DBG] pgmap v351: 326 pgs: 8 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 35 B/s, 1 objects/s recovering 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: cluster 2026-03-09T17:32:06.729479+0000 mgr.y (mgr.14505) 254 : cluster [DBG] pgmap v351: 326 pgs: 8 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 35 B/s, 1 objects/s recovering 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: cluster 2026-03-09T17:32:06.972408+0000 mon.a (mon.0) 2050 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: cluster 2026-03-09T17:32:06.972408+0000 mon.a (mon.0) 2050 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:06.996145+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:06.996145+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.000266+0000 mon.c (mon.2) 496 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.000266+0000 mon.c (mon.2) 496 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.012937+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.012937+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.943730+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.943730+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.944034+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.944034+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.948693+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.948693+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.949211+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.949211+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: cluster 2026-03-09T17:32:07.957536+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: cluster 2026-03-09T17:32:07.957536+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.958150+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.958150+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.958254+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:08 vm02 bash[23351]: audit 2026-03-09T17:32:07.958254+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: cluster 2026-03-09T17:32:06.729479+0000 mgr.y (mgr.14505) 254 : cluster [DBG] pgmap v351: 326 pgs: 8 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 35 B/s, 1 objects/s recovering 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: cluster 2026-03-09T17:32:06.729479+0000 mgr.y (mgr.14505) 254 : cluster [DBG] pgmap v351: 326 pgs: 8 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 35 B/s, 1 objects/s recovering 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: cluster 2026-03-09T17:32:06.972408+0000 mon.a (mon.0) 2050 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: cluster 2026-03-09T17:32:06.972408+0000 mon.a (mon.0) 2050 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:06.996145+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:06.996145+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.000266+0000 mon.c (mon.2) 496 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.000266+0000 mon.c (mon.2) 496 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.012937+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.012937+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.943730+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.943730+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.944034+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.944034+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.948693+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.948693+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.949211+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.949211+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: cluster 2026-03-09T17:32:07.957536+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: cluster 2026-03-09T17:32:07.957536+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.958150+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.958150+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.958254+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:08 vm00 bash[28333]: audit 2026-03-09T17:32:07.958254+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: cluster 2026-03-09T17:32:06.729479+0000 mgr.y (mgr.14505) 254 : cluster [DBG] pgmap v351: 326 pgs: 8 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 35 B/s, 1 objects/s recovering 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: cluster 2026-03-09T17:32:06.729479+0000 mgr.y (mgr.14505) 254 : cluster [DBG] pgmap v351: 326 pgs: 8 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 313 active+clean; 4.4 MiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 35 B/s, 1 objects/s recovering 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: cluster 2026-03-09T17:32:06.972408+0000 mon.a (mon.0) 2050 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: cluster 2026-03-09T17:32:06.972408+0000 mon.a (mon.0) 2050 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:06.996145+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:06.996145+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.000266+0000 mon.c (mon.2) 496 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.000266+0000 mon.c (mon.2) 496 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.012937+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.012937+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.943730+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.943730+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.944034+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.944034+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.948693+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.948693+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 
192.168.123.100:0/2291852577' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.949211+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.949211+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: cluster 2026-03-09T17:32:07.957536+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: cluster 2026-03-09T17:32:07.957536+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.958150+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.958150+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.958254+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:08.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:08 vm00 bash[20770]: audit 2026-03-09T17:32:07.958254+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.001083+0000 mon.c (mon.2) 497 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.001083+0000 mon.c (mon.2) 497 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: cluster 2026-03-09T17:32:08.944215+0000 mon.a (mon.0) 2057 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: cluster 2026-03-09T17:32:08.944215+0000 mon.a (mon.0) 2057 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.947603+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.947603+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.947699+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.947699+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: cluster 2026-03-09T17:32:08.954778+0000 mon.a (mon.0) 2060 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: cluster 2026-03-09T17:32:08.954778+0000 mon.a (mon.0) 2060 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.965458+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.965458+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.966922+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.966922+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 
192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.967537+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.967537+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.968988+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.968988+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.969607+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.969607+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.970180+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:08.970180+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:09.001792+0000 mon.c (mon.2) 498 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:09 vm02 bash[23351]: audit 2026-03-09T17:32:09.001792+0000 mon.c (mon.2) 498 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.001083+0000 mon.c (mon.2) 497 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.001083+0000 mon.c (mon.2) 497 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: cluster 2026-03-09T17:32:08.944215+0000 mon.a (mon.0) 2057 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: cluster 2026-03-09T17:32:08.944215+0000 mon.a (mon.0) 2057 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.947603+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.947603+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.947699+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.947699+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: cluster 2026-03-09T17:32:08.954778+0000 mon.a (mon.0) 2060 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: cluster 2026-03-09T17:32:08.954778+0000 mon.a (mon.0) 2060 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.965458+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.965458+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 
192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.966922+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.966922+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.967537+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.967537+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.968988+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.968988+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.969607+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.969607+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.970180+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:08.970180+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:09.001792+0000 mon.c (mon.2) 498 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:09 vm00 bash[28333]: audit 2026-03-09T17:32:09.001792+0000 mon.c (mon.2) 498 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.001083+0000 mon.c (mon.2) 497 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.001083+0000 mon.c (mon.2) 497 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: cluster 2026-03-09T17:32:08.944215+0000 mon.a (mon.0) 2057 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: cluster 2026-03-09T17:32:08.944215+0000 mon.a (mon.0) 2057 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.947603+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.947603+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59916-52"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.947699+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.947699+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-35"}]': finished 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: cluster 2026-03-09T17:32:08.954778+0000 mon.a (mon.0) 2060 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: cluster 2026-03-09T17:32:08.954778+0000 mon.a (mon.0) 2060 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.965458+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.965458+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.966922+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.966922+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.967537+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.967537+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.968988+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.968988+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.969607+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.969607+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.970180+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:08.970180+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:09.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:09.001792+0000 mon.c (mon.2) 498 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:09.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:09 vm00 bash[20770]: audit 2026-03-09T17:32:09.001792+0000 mon.c (mon.2) 498 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: cluster 2026-03-09T17:32:08.730012+0000 mgr.y (mgr.14505) 255 : cluster [DBG] pgmap v354: 317 pgs: 4 active+clean+snaptrim, 1 peering, 7 active+clean+snaptrim_wait, 305 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: cluster 2026-03-09T17:32:08.730012+0000 mgr.y (mgr.14505) 255 : cluster [DBG] pgmap v354: 317 pgs: 4 active+clean+snaptrim, 1 peering, 7 active+clean+snaptrim_wait, 305 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:09.950827+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:09.950827+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:09.953471+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 
192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:09.953471+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: cluster 2026-03-09T17:32:09.968204+0000 mon.a (mon.0) 2065 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: cluster 2026-03-09T17:32:09.968204+0000 mon.a (mon.0) 2065 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:09.969294+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:09.969294+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:10.002522+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:10 vm02 bash[23351]: audit 2026-03-09T17:32:10.002522+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:10.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: cluster 2026-03-09T17:32:08.730012+0000 mgr.y (mgr.14505) 255 : cluster [DBG] pgmap v354: 317 pgs: 4 active+clean+snaptrim, 1 peering, 7 active+clean+snaptrim_wait, 305 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:32:10.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: cluster 2026-03-09T17:32:08.730012+0000 mgr.y (mgr.14505) 255 : cluster [DBG] pgmap v354: 317 pgs: 4 active+clean+snaptrim, 1 peering, 7 active+clean+snaptrim_wait, 305 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:32:10.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:09.950827+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:09.950827+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:09.953471+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:09.953471+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: cluster 2026-03-09T17:32:09.968204+0000 mon.a (mon.0) 2065 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: cluster 2026-03-09T17:32:09.968204+0000 mon.a (mon.0) 2065 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:09.969294+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:09.969294+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:10.002522+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:10 vm00 bash[28333]: audit 2026-03-09T17:32:10.002522+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: cluster 2026-03-09T17:32:08.730012+0000 mgr.y (mgr.14505) 255 : cluster [DBG] pgmap v354: 317 pgs: 4 active+clean+snaptrim, 1 peering, 7 active+clean+snaptrim_wait, 305 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: cluster 2026-03-09T17:32:08.730012+0000 mgr.y (mgr.14505) 255 : cluster [DBG] pgmap v354: 317 pgs: 4 active+clean+snaptrim, 1 peering, 7 active+clean+snaptrim_wait, 305 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:09.950827+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:09.950827+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59916-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:09.953471+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:09.953471+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: cluster 2026-03-09T17:32:09.968204+0000 mon.a (mon.0) 2065 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: cluster 2026-03-09T17:32:09.968204+0000 mon.a (mon.0) 2065 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:09.969294+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:09.969294+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:10.002522+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:10 vm00 bash[20770]: audit 2026-03-09T17:32:10.002522+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:12.008 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:32:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: cluster 2026-03-09T17:32:10.730412+0000 mgr.y (mgr.14505) 256 : cluster [DBG] pgmap v357: 285 pgs: 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: cluster 2026-03-09T17:32:10.730412+0000 mgr.y (mgr.14505) 256 : cluster [DBG] pgmap v357: 285 pgs: 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: cluster 2026-03-09T17:32:10.985039+0000 mon.a (mon.0) 2067 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: cluster 2026-03-09T17:32:10.985039+0000 mon.a (mon.0) 2067 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: audit 2026-03-09T17:32:10.985707+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: audit 2026-03-09T17:32:10.985707+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: audit 2026-03-09T17:32:10.988519+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: audit 2026-03-09T17:32:10.988519+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: audit 2026-03-09T17:32:11.003369+0000 mon.c (mon.2) 500 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:12 vm00 bash[28333]: audit 2026-03-09T17:32:11.003369+0000 mon.c (mon.2) 500 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: cluster 2026-03-09T17:32:10.730412+0000 mgr.y (mgr.14505) 256 : cluster [DBG] pgmap v357: 285 pgs: 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: cluster 2026-03-09T17:32:10.730412+0000 mgr.y (mgr.14505) 256 : cluster [DBG] pgmap v357: 285 pgs: 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: cluster 2026-03-09T17:32:10.985039+0000 mon.a (mon.0) 2067 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: cluster 2026-03-09T17:32:10.985039+0000 mon.a (mon.0) 2067 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: audit 2026-03-09T17:32:10.985707+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: audit 2026-03-09T17:32:10.985707+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: audit 2026-03-09T17:32:10.988519+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: audit 2026-03-09T17:32:10.988519+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: audit 2026-03-09T17:32:11.003369+0000 mon.c (mon.2) 500 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:12 vm00 bash[20770]: audit 2026-03-09T17:32:11.003369+0000 mon.c (mon.2) 500 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: cluster 2026-03-09T17:32:10.730412+0000 mgr.y (mgr.14505) 256 : cluster [DBG] pgmap v357: 285 pgs: 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: cluster 2026-03-09T17:32:10.730412+0000 mgr.y (mgr.14505) 256 : cluster [DBG] pgmap v357: 285 pgs: 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: cluster 2026-03-09T17:32:10.985039+0000 mon.a (mon.0) 2067 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: cluster 2026-03-09T17:32:10.985039+0000 mon.a (mon.0) 2067 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: audit 2026-03-09T17:32:10.985707+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: audit 2026-03-09T17:32:10.985707+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: audit 2026-03-09T17:32:10.988519+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: audit 2026-03-09T17:32:10.988519+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: audit 2026-03-09T17:32:11.003369+0000 mon.c (mon.2) 500 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:12 vm02 bash[23351]: audit 2026-03-09T17:32:11.003369+0000 mon.c (mon.2) 500 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.699205+0000 mgr.y (mgr.14505) 257 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.699205+0000 mgr.y (mgr.14505) 257 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.959751+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.959751+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.959856+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.959856+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.983960+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:11.983960+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: cluster 2026-03-09T17:32:11.993487+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: cluster 2026-03-09T17:32:11.993487+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.003176+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.003176+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.004543+0000 mon.c (mon.2) 501 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.004543+0000 mon.c (mon.2) 501 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.653170+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.653170+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.962855+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.962855+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: cluster 2026-03-09T17:32:12.967634+0000 mon.a (mon.0) 2074 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: cluster 2026-03-09T17:32:12.967634+0000 mon.a (mon.0) 2074 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.972324+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.972324+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.975709+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:12.975709+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:13.005259+0000 mon.c (mon.2) 503 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:13 vm00 bash[28333]: audit 2026-03-09T17:32:13.005259+0000 mon.c (mon.2) 503 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.699205+0000 mgr.y (mgr.14505) 257 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.699205+0000 mgr.y (mgr.14505) 257 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.959751+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.959751+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.959856+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.959856+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.983960+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:11.983960+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: cluster 2026-03-09T17:32:11.993487+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: cluster 2026-03-09T17:32:11.993487+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.003176+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.003176+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.004543+0000 mon.c (mon.2) 501 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.004543+0000 mon.c (mon.2) 501 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.653170+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.653170+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.962855+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.962855+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: cluster 2026-03-09T17:32:12.967634+0000 mon.a (mon.0) 2074 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: cluster 2026-03-09T17:32:12.967634+0000 mon.a (mon.0) 2074 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.972324+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.972324+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.975709+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:12.975709+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:13.005259+0000 mon.c (mon.2) 503 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:13 vm00 bash[20770]: audit 2026-03-09T17:32:13.005259+0000 mon.c (mon.2) 503 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.699205+0000 mgr.y (mgr.14505) 257 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.699205+0000 mgr.y (mgr.14505) 257 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.959751+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.959751+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59916-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.959856+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.959856+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.983960+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:11.983960+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: cluster 2026-03-09T17:32:11.993487+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: cluster 2026-03-09T17:32:11.993487+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.003176+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.003176+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.004543+0000 mon.c (mon.2) 501 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.004543+0000 mon.c (mon.2) 501 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.653170+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.653170+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.962855+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.962855+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: cluster 2026-03-09T17:32:12.967634+0000 mon.a (mon.0) 2074 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: cluster 2026-03-09T17:32:12.967634+0000 mon.a (mon.0) 2074 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.972324+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.972324+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.975709+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:12.975709+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:13.005259+0000 mon.c (mon.2) 503 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:13 vm02 bash[23351]: audit 2026-03-09T17:32:13.005259+0000 mon.c (mon.2) 503 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:14.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: cluster 2026-03-09T17:32:12.730873+0000 mgr.y (mgr.14505) 258 : cluster [DBG] pgmap v360: 325 pgs: 40 unknown, 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: cluster 2026-03-09T17:32:12.730873+0000 mgr.y (mgr.14505) 258 : cluster [DBG] pgmap v360: 325 pgs: 40 unknown, 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: cluster 2026-03-09T17:32:13.007325+0000 mon.a (mon.0) 2076 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: cluster 2026-03-09T17:32:13.007325+0000 mon.a (mon.0) 2076 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.966122+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.966122+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: cluster 2026-03-09T17:32:13.970567+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: cluster 2026-03-09T17:32:13.970567+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.975665+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.975665+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.976519+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 
192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.976519+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.978429+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.978429+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.978500+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:13.978500+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:14.006188+0000 mon.c (mon.2) 504 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:14 vm00 bash[28333]: audit 2026-03-09T17:32:14.006188+0000 mon.c (mon.2) 504 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: cluster 2026-03-09T17:32:12.730873+0000 mgr.y (mgr.14505) 258 : cluster [DBG] pgmap v360: 325 pgs: 40 unknown, 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: cluster 2026-03-09T17:32:12.730873+0000 mgr.y (mgr.14505) 258 : cluster [DBG] pgmap v360: 325 pgs: 40 unknown, 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: cluster 2026-03-09T17:32:13.007325+0000 mon.a (mon.0) 2076 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: cluster 2026-03-09T17:32:13.007325+0000 mon.a (mon.0) 2076 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.966122+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.966122+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: cluster 2026-03-09T17:32:13.970567+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: cluster 2026-03-09T17:32:13.970567+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.975665+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.975665+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.976519+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 
192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.976519+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.978429+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.978429+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.978500+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:13.978500+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:14.006188+0000 mon.c (mon.2) 504 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:14 vm00 bash[20770]: audit 2026-03-09T17:32:14.006188+0000 mon.c (mon.2) 504 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: cluster 2026-03-09T17:32:12.730873+0000 mgr.y (mgr.14505) 258 : cluster [DBG] pgmap v360: 325 pgs: 40 unknown, 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: cluster 2026-03-09T17:32:12.730873+0000 mgr.y (mgr.14505) 258 : cluster [DBG] pgmap v360: 325 pgs: 40 unknown, 3 active+clean+snaptrim, 1 peering, 3 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: cluster 2026-03-09T17:32:13.007325+0000 mon.a (mon.0) 2076 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: cluster 2026-03-09T17:32:13.007325+0000 mon.a (mon.0) 2076 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.966122+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.966122+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: cluster 2026-03-09T17:32:13.970567+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: cluster 2026-03-09T17:32:13.970567+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.975665+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.975665+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.976519+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 
192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.976519+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.978429+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.978429+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.978500+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:13.978500+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:14.006188+0000 mon.c (mon.2) 504 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:14 vm02 bash[23351]: audit 2026-03-09T17:32:14.006188+0000 mon.c (mon.2) 504 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:15.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.731924+0000 mon.c (mon.2) 505 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.731924+0000 mon.c (mon.2) 505 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.732775+0000 mon.a (mon.0) 2081 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.732775+0000 mon.a (mon.0) 2081 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: cluster 2026-03-09T17:32:14.966254+0000 mon.a (mon.0) 2082 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: cluster 2026-03-09T17:32:14.966254+0000 mon.a (mon.0) 2082 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.969591+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.969591+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.969726+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.969726+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.969859+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.969859+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.977187+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.977187+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: cluster 2026-03-09T17:32:14.977547+0000 mon.a (mon.0) 2086 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: cluster 2026-03-09T17:32:14.977547+0000 mon.a (mon.0) 2086 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.982826+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:14.982826+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:15.007056+0000 mon.c (mon.2) 506 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:15 vm00 bash[28333]: audit 2026-03-09T17:32:15.007056+0000 mon.c (mon.2) 506 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.731924+0000 mon.c (mon.2) 505 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.731924+0000 mon.c (mon.2) 505 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.732775+0000 mon.a (mon.0) 2081 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.732775+0000 mon.a (mon.0) 2081 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: cluster 2026-03-09T17:32:14.966254+0000 mon.a (mon.0) 2082 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: cluster 2026-03-09T17:32:14.966254+0000 mon.a (mon.0) 2082 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.969591+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.969591+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.969726+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.969726+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.969859+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.969859+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.977187+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.977187+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: cluster 2026-03-09T17:32:14.977547+0000 mon.a (mon.0) 2086 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: cluster 2026-03-09T17:32:14.977547+0000 mon.a (mon.0) 2086 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.982826+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:14.982826+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:15.007056+0000 mon.c (mon.2) 506 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:15 vm00 bash[20770]: audit 2026-03-09T17:32:15.007056+0000 mon.c (mon.2) 506 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.731924+0000 mon.c (mon.2) 505 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.731924+0000 mon.c (mon.2) 505 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.732775+0000 mon.a (mon.0) 2081 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.732775+0000 mon.a (mon.0) 2081 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: cluster 2026-03-09T17:32:14.966254+0000 mon.a (mon.0) 2082 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: cluster 2026-03-09T17:32:14.966254+0000 mon.a (mon.0) 2082 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.969591+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]': finished 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.969591+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-37", "mode": "writeback"}]': finished 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.969726+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.969726+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.969859+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.969859+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.977187+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.977187+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/2489941242' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: cluster 2026-03-09T17:32:14.977547+0000 mon.a (mon.0) 2086 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: cluster 2026-03-09T17:32:14.977547+0000 mon.a (mon.0) 2086 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.982826+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:14.982826+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:15.007056+0000 mon.c (mon.2) 506 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:15 vm02 bash[23351]: audit 2026-03-09T17:32:15.007056+0000 mon.c (mon.2) 506 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: cluster 2026-03-09T17:32:14.731277+0000 mgr.y (mgr.14505) 259 : cluster [DBG] pgmap v363: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: cluster 2026-03-09T17:32:14.731277+0000 mgr.y (mgr.14505) 259 : cluster [DBG] pgmap v363: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:15.034556+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:15.034556+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:15.035895+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:15.035895+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.003935+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.003935+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.004099+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.004099+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.009441+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.009441+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.012533+0000 mon.c (mon.2) 507 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.012533+0000 mon.c (mon.2) 507 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: cluster 2026-03-09T17:32:16.021228+0000 mon.a (mon.0) 2091 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: cluster 2026-03-09T17:32:16.021228+0000 mon.a (mon.0) 2091 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.021830+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:16 vm00 bash[28333]: audit 2026-03-09T17:32:16.021830+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: cluster 2026-03-09T17:32:14.731277+0000 mgr.y (mgr.14505) 259 : cluster [DBG] pgmap v363: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: cluster 2026-03-09T17:32:14.731277+0000 mgr.y (mgr.14505) 259 : cluster [DBG] pgmap v363: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:15.034556+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:15.034556+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:15.035895+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:15.035895+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.003935+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.003935+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.004099+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.004099+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.009441+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.009441+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.012533+0000 mon.c (mon.2) 507 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.012533+0000 mon.c (mon.2) 507 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: cluster 2026-03-09T17:32:16.021228+0000 mon.a (mon.0) 2091 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: cluster 2026-03-09T17:32:16.021228+0000 mon.a (mon.0) 2091 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.021830+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:16 vm00 bash[20770]: audit 2026-03-09T17:32:16.021830+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: cluster 2026-03-09T17:32:14.731277+0000 mgr.y (mgr.14505) 259 : cluster [DBG] pgmap v363: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: cluster 2026-03-09T17:32:14.731277+0000 mgr.y (mgr.14505) 259 : cluster [DBG] pgmap v363: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:15.034556+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:15.034556+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:15.035895+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:15.035895+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.003935+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.003935+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59916-53"}]': finished 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.004099+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.004099+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.009441+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.009441+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.012533+0000 mon.c (mon.2) 507 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.012533+0000 mon.c (mon.2) 507 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: cluster 2026-03-09T17:32:16.021228+0000 mon.a (mon.0) 2091 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: cluster 2026-03-09T17:32:16.021228+0000 mon.a (mon.0) 2091 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.021830+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:16 vm02 bash[23351]: audit 2026-03-09T17:32:16.021830+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]: dispatch 2026-03-09T17:32:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:32:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:32:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.027287+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.027287+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.029329+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.029329+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.030669+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.030669+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.038527+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.038527+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.039208+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.039208+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 
192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.039486+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.039486+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: cluster 2026-03-09T17:32:16.621560+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: cluster 2026-03-09T17:32:16.621560+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.706339+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.706339+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.706492+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.706492+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: cluster 2026-03-09T17:32:16.745680+0000 mon.a (mon.0) 2099 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: cluster 2026-03-09T17:32:16.745680+0000 mon.a (mon.0) 2099 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.753349+0000 mon.c (mon.2) 511 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.753349+0000 mon.c (mon.2) 511 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.755064+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.755064+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.757049+0000 mon.a (mon.0) 2100 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.757049+0000 mon.a (mon.0) 2100 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.757117+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:16.757117+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:17.013418+0000 mon.c (mon.2) 513 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:17 vm00 bash[20770]: audit 2026-03-09T17:32:17.013418+0000 mon.c (mon.2) 513 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.027287+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.027287+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.029329+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.029329+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.030669+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.030669+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.038527+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.038527+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.039208+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 
192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.039208+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.039486+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.039486+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: cluster 2026-03-09T17:32:16.621560+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: cluster 2026-03-09T17:32:16.621560+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.706339+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.706339+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.706492+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.706492+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: cluster 2026-03-09T17:32:16.745680+0000 mon.a (mon.0) 2099 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: cluster 2026-03-09T17:32:16.745680+0000 mon.a (mon.0) 2099 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.753349+0000 mon.c (mon.2) 511 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.753349+0000 mon.c (mon.2) 511 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.755064+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.755064+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.757049+0000 mon.a (mon.0) 2100 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.757049+0000 mon.a (mon.0) 2100 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.757117+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:16.757117+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:17.013418+0000 mon.c (mon.2) 513 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:17.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:17 vm00 bash[28333]: audit 2026-03-09T17:32:17.013418+0000 mon.c (mon.2) 513 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.027287+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.027287+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.029329+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.029329+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.030669+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.030669+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.038527+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.038527+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.039208+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 
192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.039208+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.039486+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.039486+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: cluster 2026-03-09T17:32:16.621560+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: cluster 2026-03-09T17:32:16.621560+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.706339+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.706339+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-37"}]': finished 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.706492+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.706492+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59916-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: cluster 2026-03-09T17:32:16.745680+0000 mon.a (mon.0) 2099 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: cluster 2026-03-09T17:32:16.745680+0000 mon.a (mon.0) 2099 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.753349+0000 mon.c (mon.2) 511 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.753349+0000 mon.c (mon.2) 511 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.755064+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.755064+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.757049+0000 mon.a (mon.0) 2100 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.757049+0000 mon.a (mon.0) 2100 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.757117+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:16.757117+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:17.013418+0000 mon.c (mon.2) 513 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:17.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:17 vm02 bash[23351]: audit 2026-03-09T17:32:17.013418+0000 mon.c (mon.2) 513 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: cluster 2026-03-09T17:32:16.731649+0000 mgr.y (mgr.14505) 260 : cluster [DBG] pgmap v366: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: cluster 2026-03-09T17:32:16.731649+0000 mgr.y (mgr.14505) 260 : cluster [DBG] pgmap v366: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: audit 2026-03-09T17:32:17.709820+0000 mon.a (mon.0) 2102 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]': finished 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: audit 2026-03-09T17:32:17.709820+0000 mon.a (mon.0) 2102 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]': finished 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: cluster 2026-03-09T17:32:17.737742+0000 mon.a (mon.0) 2103 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: cluster 2026-03-09T17:32:17.737742+0000 mon.a (mon.0) 2103 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: audit 2026-03-09T17:32:18.014279+0000 mon.c (mon.2) 514 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:18 vm02 bash[23351]: audit 2026-03-09T17:32:18.014279+0000 mon.c (mon.2) 514 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: cluster 2026-03-09T17:32:16.731649+0000 mgr.y (mgr.14505) 260 : cluster [DBG] pgmap v366: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: cluster 2026-03-09T17:32:16.731649+0000 mgr.y (mgr.14505) 260 : cluster [DBG] pgmap v366: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:18.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: audit 2026-03-09T17:32:17.709820+0000 mon.a (mon.0) 2102 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]': finished 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: audit 2026-03-09T17:32:17.709820+0000 mon.a (mon.0) 2102 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]': finished 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: cluster 2026-03-09T17:32:17.737742+0000 mon.a (mon.0) 2103 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: cluster 2026-03-09T17:32:17.737742+0000 mon.a (mon.0) 2103 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: audit 2026-03-09T17:32:18.014279+0000 mon.c (mon.2) 514 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:18 vm00 bash[20770]: audit 2026-03-09T17:32:18.014279+0000 mon.c (mon.2) 514 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: cluster 2026-03-09T17:32:16.731649+0000 mgr.y (mgr.14505) 260 : cluster [DBG] pgmap v366: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: cluster 2026-03-09T17:32:16.731649+0000 mgr.y (mgr.14505) 260 : cluster [DBG] pgmap v366: 317 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 312 active+clean; 4.4 MiB data, 690 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: audit 2026-03-09T17:32:17.709820+0000 mon.a (mon.0) 2102 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]': finished 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: audit 2026-03-09T17:32:17.709820+0000 mon.a (mon.0) 2102 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "23"}]': finished 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: cluster 2026-03-09T17:32:17.737742+0000 mon.a (mon.0) 2103 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: cluster 2026-03-09T17:32:17.737742+0000 mon.a (mon.0) 2103 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: audit 2026-03-09T17:32:18.014279+0000 mon.c (mon.2) 514 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:18 vm00 bash[28333]: audit 2026-03-09T17:32:18.014279+0000 mon.c (mon.2) 514 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.716676+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.716676+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: cluster 2026-03-09T17:32:18.723827+0000 mon.a (mon.0) 2105 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: cluster 2026-03-09T17:32:18.723827+0000 mon.a (mon.0) 2105 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.724390+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.724390+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: cluster 2026-03-09T17:32:18.732219+0000 mgr.y (mgr.14505) 261 : cluster [DBG] pgmap v370: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: cluster 2026-03-09T17:32:18.732219+0000 mgr.y (mgr.14505) 261 : cluster [DBG] pgmap v370: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.732721+0000 mon.c (mon.2) 515 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.732721+0000 mon.c (mon.2) 515 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.746020+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.746020+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.746184+0000 mon.a (mon.0) 2107 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:18.746184+0000 mon.a (mon.0) 2107 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:19.015090+0000 mon.c (mon.2) 516 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:19 vm00 bash[28333]: audit 2026-03-09T17:32:19.015090+0000 mon.c (mon.2) 516 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.716676+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.716676+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: cluster 2026-03-09T17:32:18.723827+0000 mon.a (mon.0) 2105 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: cluster 2026-03-09T17:32:18.723827+0000 mon.a (mon.0) 2105 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.724390+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.724390+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: cluster 2026-03-09T17:32:18.732219+0000 mgr.y (mgr.14505) 261 : cluster [DBG] pgmap v370: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: cluster 2026-03-09T17:32:18.732219+0000 mgr.y (mgr.14505) 261 : cluster [DBG] pgmap v370: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.732721+0000 mon.c (mon.2) 515 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.732721+0000 mon.c (mon.2) 515 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.746020+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.746020+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.746184+0000 mon.a (mon.0) 2107 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:18.746184+0000 mon.a (mon.0) 2107 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:19.015090+0000 mon.c (mon.2) 516 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:19 vm00 bash[20770]: audit 2026-03-09T17:32:19.015090+0000 mon.c (mon.2) 516 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.716676+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.716676+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59916-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: cluster 2026-03-09T17:32:18.723827+0000 mon.a (mon.0) 2105 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: cluster 2026-03-09T17:32:18.723827+0000 mon.a (mon.0) 2105 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.724390+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.724390+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: cluster 2026-03-09T17:32:18.732219+0000 mgr.y (mgr.14505) 261 : cluster [DBG] pgmap v370: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: cluster 2026-03-09T17:32:18.732219+0000 mgr.y (mgr.14505) 261 : cluster [DBG] pgmap v370: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.732721+0000 mon.c (mon.2) 515 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.732721+0000 mon.c (mon.2) 515 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.746020+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.746020+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.746184+0000 mon.a (mon.0) 2107 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:18.746184+0000 mon.a (mon.0) 2107 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:19.015090+0000 mon.c (mon.2) 516 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:19 vm02 bash[23351]: audit 2026-03-09T17:32:19.015090+0000 mon.c (mon.2) 516 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: cluster 2026-03-09T17:32:19.716563+0000 mon.a (mon.0) 2108 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: cluster 2026-03-09T17:32:19.716563+0000 mon.a (mon.0) 2108 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.729070+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.729070+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.729250+0000 mon.a (mon.0) 2110 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]': finished 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.729250+0000 mon.a (mon.0) 2110 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]': finished 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.757845+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.757845+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: cluster 2026-03-09T17:32:19.772079+0000 mon.a (mon.0) 2111 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: cluster 2026-03-09T17:32:19.772079+0000 mon.a (mon.0) 2111 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.785248+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:19.785248+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:20.015856+0000 mon.c (mon.2) 517 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:20 vm02 bash[23351]: audit 2026-03-09T17:32:20.015856+0000 mon.c (mon.2) 517 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: cluster 2026-03-09T17:32:19.716563+0000 mon.a (mon.0) 2108 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: cluster 2026-03-09T17:32:19.716563+0000 mon.a (mon.0) 2108 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.729070+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.729070+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.729250+0000 mon.a (mon.0) 2110 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.729250+0000 mon.a (mon.0) 2110 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.757845+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.757845+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: cluster 2026-03-09T17:32:19.772079+0000 mon.a (mon.0) 2111 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: cluster 2026-03-09T17:32:19.772079+0000 mon.a (mon.0) 2111 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.785248+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:19.785248+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:20.015856+0000 mon.c (mon.2) 517 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:20 vm00 bash[20770]: audit 2026-03-09T17:32:20.015856+0000 mon.c (mon.2) 517 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: cluster 2026-03-09T17:32:19.716563+0000 mon.a (mon.0) 2108 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: cluster 2026-03-09T17:32:19.716563+0000 mon.a (mon.0) 2108 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.729070+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.729070+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.729250+0000 mon.a (mon.0) 2110 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.729250+0000 mon.a (mon.0) 2110 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "24"}]': finished 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.757845+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.757845+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: cluster 2026-03-09T17:32:19.772079+0000 mon.a (mon.0) 2111 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: cluster 2026-03-09T17:32:19.772079+0000 mon.a (mon.0) 2111 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.785248+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:19.785248+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:20.015856+0000 mon.c (mon.2) 517 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:20 vm00 bash[28333]: audit 2026-03-09T17:32:20.015856+0000 mon.c (mon.2) 517 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: cluster 2026-03-09T17:32:20.732616+0000 mgr.y (mgr.14505) 262 : cluster [DBG] pgmap v372: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: cluster 2026-03-09T17:32:20.732616+0000 mgr.y (mgr.14505) 262 : cluster [DBG] pgmap v372: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.841074+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.841074+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.845505+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.845505+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: cluster 2026-03-09T17:32:20.850001+0000 mon.a (mon.0) 2114 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: cluster 2026-03-09T17:32:20.850001+0000 mon.a (mon.0) 2114 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.851751+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.851751+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 
192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.852328+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.852328+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.852515+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:20.852515+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:21.016690+0000 mon.c (mon.2) 519 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:21 vm02 bash[23351]: audit 2026-03-09T17:32:21.016690+0000 mon.c (mon.2) 519 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:22.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:32:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:32:22.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: cluster 2026-03-09T17:32:20.732616+0000 mgr.y (mgr.14505) 262 : cluster [DBG] pgmap v372: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: cluster 2026-03-09T17:32:20.732616+0000 mgr.y (mgr.14505) 262 : cluster [DBG] pgmap v372: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.841074+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.841074+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.845505+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.845505+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: cluster 2026-03-09T17:32:20.850001+0000 mon.a (mon.0) 2114 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: cluster 2026-03-09T17:32:20.850001+0000 mon.a (mon.0) 2114 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.851751+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.851751+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.852328+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.852328+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.852515+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:20.852515+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:21.016690+0000 mon.c (mon.2) 519 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:21 vm00 bash[20770]: audit 2026-03-09T17:32:21.016690+0000 mon.c (mon.2) 519 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: cluster 2026-03-09T17:32:20.732616+0000 mgr.y (mgr.14505) 262 : cluster [DBG] pgmap v372: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: cluster 2026-03-09T17:32:20.732616+0000 mgr.y (mgr.14505) 262 : cluster [DBG] pgmap v372: 325 pgs: 40 unknown, 1 active+clean+snaptrim, 1 peering, 283 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.841074+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.841074+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.845505+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.845505+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: cluster 2026-03-09T17:32:20.850001+0000 mon.a (mon.0) 2114 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: cluster 2026-03-09T17:32:20.850001+0000 mon.a (mon.0) 2114 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.851751+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.851751+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 
192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.852328+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.852328+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.852515+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:20.852515+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:21.016690+0000 mon.c (mon.2) 519 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:21 vm00 bash[28333]: audit 2026-03-09T17:32:21.016690+0000 mon.c (mon.2) 519 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.706933+0000 mgr.y (mgr.14505) 263 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.706933+0000 mgr.y (mgr.14505) 263 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.844527+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.844527+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.844621+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.844621+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.847873+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.847873+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: cluster 2026-03-09T17:32:21.848336+0000 mon.a (mon.0) 2119 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: cluster 2026-03-09T17:32:21.848336+0000 mon.a (mon.0) 2119 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.857500+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.857500+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.857951+0000 mon.a (mon.0) 2120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.857951+0000 mon.a (mon.0) 2120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.859427+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:21.859427+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:22.017411+0000 mon.c (mon.2) 521 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:22.017411+0000 mon.c (mon.2) 521 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: cluster 2026-03-09T17:32:22.845116+0000 mon.a (mon.0) 2122 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: cluster 2026-03-09T17:32:22.845116+0000 mon.a (mon.0) 2122 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:22.848335+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:22.848335+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:22.848486+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: audit 2026-03-09T17:32:22.848486+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: cluster 2026-03-09T17:32:22.852492+0000 mon.a (mon.0) 2125 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:22 vm00 bash[20770]: cluster 2026-03-09T17:32:22.852492+0000 mon.a (mon.0) 2125 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.706933+0000 mgr.y (mgr.14505) 263 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.706933+0000 mgr.y (mgr.14505) 263 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.844527+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.844527+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.844621+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.844621+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.847873+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.847873+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: cluster 2026-03-09T17:32:21.848336+0000 mon.a (mon.0) 2119 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: cluster 2026-03-09T17:32:21.848336+0000 mon.a (mon.0) 2119 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.857500+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.857500+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.857951+0000 mon.a (mon.0) 2120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.857951+0000 mon.a (mon.0) 2120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.859427+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:21.859427+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:22.017411+0000 mon.c (mon.2) 521 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:22.017411+0000 mon.c (mon.2) 521 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: cluster 2026-03-09T17:32:22.845116+0000 mon.a (mon.0) 2122 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: cluster 2026-03-09T17:32:22.845116+0000 mon.a (mon.0) 2122 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:22.848335+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]': finished 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:22.848335+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]': finished 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:22.848486+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: audit 2026-03-09T17:32:22.848486+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: cluster 2026-03-09T17:32:22.852492+0000 mon.a (mon.0) 2125 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T17:32:23.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:22 vm00 bash[28333]: cluster 2026-03-09T17:32:22.852492+0000 mon.a (mon.0) 2125 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.706933+0000 mgr.y (mgr.14505) 263 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.706933+0000 mgr.y (mgr.14505) 263 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.844527+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.844527+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.844621+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.844621+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.847873+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.847873+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: cluster 2026-03-09T17:32:21.848336+0000 mon.a (mon.0) 2119 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: cluster 2026-03-09T17:32:21.848336+0000 mon.a (mon.0) 2119 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.857500+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.857500+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.100:0/184919154' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.857951+0000 mon.a (mon.0) 2120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.857951+0000 mon.a (mon.0) 2120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.859427+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:21.859427+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:22.017411+0000 mon.c (mon.2) 521 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:22.017411+0000 mon.c (mon.2) 521 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: cluster 2026-03-09T17:32:22.845116+0000 mon.a (mon.0) 2122 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: cluster 2026-03-09T17:32:22.845116+0000 mon.a (mon.0) 2122 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:22.848335+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:22.848335+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-39", "mode": "writeback"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:22.848486+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: audit 2026-03-09T17:32:22.848486+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59916-54"}]': finished 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: cluster 2026-03-09T17:32:22.852492+0000 mon.a (mon.0) 2125 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T17:32:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:22 vm02 bash[23351]: cluster 2026-03-09T17:32:22.852492+0000 mon.a (mon.0) 2125 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T17:32:24.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: cluster 2026-03-09T17:32:22.733023+0000 mgr.y (mgr.14505) 264 : cluster [DBG] pgmap v375: 316 pgs: 32 unknown, 1 active+clean+snaptrim, 1 peering, 282 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: cluster 2026-03-09T17:32:22.733023+0000 mgr.y (mgr.14505) 264 : cluster [DBG] pgmap v375: 316 pgs: 32 unknown, 1 active+clean+snaptrim, 1 peering, 282 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.883526+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.883526+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.901489+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.901489+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.903052+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.903052+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.924748+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.924748+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.924810+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.924810+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.926155+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.926155+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.985653+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.985653+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.987125+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:22.987125+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:23.018025+0000 mon.c (mon.2) 522 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:23 vm00 bash[20770]: audit 2026-03-09T17:32:23.018025+0000 mon.c (mon.2) 522 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: cluster 2026-03-09T17:32:22.733023+0000 mgr.y (mgr.14505) 264 : cluster [DBG] pgmap v375: 316 pgs: 32 unknown, 1 active+clean+snaptrim, 1 peering, 282 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: cluster 2026-03-09T17:32:22.733023+0000 mgr.y (mgr.14505) 264 : cluster [DBG] pgmap v375: 316 pgs: 32 unknown, 1 active+clean+snaptrim, 1 peering, 282 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.883526+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.883526+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.901489+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.901489+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.903052+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.903052+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.924748+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.924748+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.924810+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 
192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.924810+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.926155+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.926155+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.985653+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.985653+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.987125+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:22.987125+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:23.018025+0000 mon.c (mon.2) 522 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:23 vm00 bash[28333]: audit 2026-03-09T17:32:23.018025+0000 mon.c (mon.2) 522 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: cluster 2026-03-09T17:32:22.733023+0000 mgr.y (mgr.14505) 264 : cluster [DBG] pgmap v375: 316 pgs: 32 unknown, 1 active+clean+snaptrim, 1 peering, 282 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: cluster 2026-03-09T17:32:22.733023+0000 mgr.y (mgr.14505) 264 : cluster [DBG] pgmap v375: 316 pgs: 32 unknown, 1 active+clean+snaptrim, 1 peering, 282 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.883526+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.883526+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.901489+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.901489+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.903052+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.903052+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.924748+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.924748+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.924810+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 
192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.924810+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.926155+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.926155+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.985653+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.985653+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.987125+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:22.987125+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:23.018025+0000 mon.c (mon.2) 522 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:23 vm02 bash[23351]: audit 2026-03-09T17:32:23.018025+0000 mon.c (mon.2) 522 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.924762+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.924762+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.924808+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.924808+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.931123+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.931123+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.931376+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.931376+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: cluster 2026-03-09T17:32:23.935715+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: cluster 2026-03-09T17:32:23.935715+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.938441+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.938441+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.938531+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:23.938531+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.018912+0000 mon.c (mon.2) 523 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.018912+0000 mon.c (mon.2) 523 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.733702+0000 mon.c (mon.2) 524 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.733702+0000 mon.c (mon.2) 524 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.734309+0000 mon.a (mon.0) 2135 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.734309+0000 mon.a (mon.0) 2135 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: cluster 2026-03-09T17:32:24.924857+0000 mon.a (mon.0) 2136 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: cluster 
2026-03-09T17:32:24.924857+0000 mon.a (mon.0) 2136 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.927862+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.927862+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.928164+0000 mon.a (mon.0) 2138 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: audit 2026-03-09T17:32:24.928164+0000 mon.a (mon.0) 2138 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: cluster 2026-03-09T17:32:24.953398+0000 mon.a (mon.0) 2139 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:24 vm00 bash[20770]: cluster 2026-03-09T17:32:24.953398+0000 mon.a (mon.0) 2139 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.924762+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.924762+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.924808+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.924808+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.931123+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.931123+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.931376+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.931376+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: cluster 2026-03-09T17:32:23.935715+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: cluster 2026-03-09T17:32:23.935715+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.938441+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.938441+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.938531+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:23.938531+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.018912+0000 mon.c (mon.2) 523 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.018912+0000 mon.c (mon.2) 523 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.733702+0000 mon.c (mon.2) 524 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.733702+0000 mon.c (mon.2) 524 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.734309+0000 mon.a (mon.0) 2135 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.734309+0000 mon.a (mon.0) 2135 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: cluster 2026-03-09T17:32:24.924857+0000 mon.a (mon.0) 2136 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: cluster 2026-03-09T17:32:24.924857+0000 mon.a (mon.0) 2136 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.927862+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.927862+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.928164+0000 mon.a (mon.0) 2138 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]': finished 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: audit 2026-03-09T17:32:24.928164+0000 mon.a (mon.0) 2138 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]': finished 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: cluster 2026-03-09T17:32:24.953398+0000 mon.a (mon.0) 2139 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T17:32:25.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:24 vm00 bash[28333]: cluster 2026-03-09T17:32:24.953398+0000 mon.a (mon.0) 2139 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.924762+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.924762+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59916-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.924808+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.924808+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.931123+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.931123+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.931376+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 
192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.931376+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: cluster 2026-03-09T17:32:23.935715+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: cluster 2026-03-09T17:32:23.935715+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.938441+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.938441+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.938531+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:23.938531+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.018912+0000 mon.c (mon.2) 523 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.018912+0000 mon.c (mon.2) 523 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.733702+0000 mon.c (mon.2) 524 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.733702+0000 mon.c (mon.2) 524 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.734309+0000 mon.a (mon.0) 2135 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.734309+0000 mon.a (mon.0) 2135 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]: dispatch 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: cluster 2026-03-09T17:32:24.924857+0000 mon.a (mon.0) 2136 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: cluster 2026-03-09T17:32:24.924857+0000 mon.a (mon.0) 2136 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.927862+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.927862+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-39"}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.928164+0000 mon.a (mon.0) 2138 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: audit 2026-03-09T17:32:24.928164+0000 mon.a (mon.0) 2138 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pg_num_actual", "val": "23"}]': finished 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: cluster 2026-03-09T17:32:24.953398+0000 mon.a (mon.0) 2139 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T17:32:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:24 vm02 bash[23351]: cluster 2026-03-09T17:32:24.953398+0000 mon.a (mon.0) 2139 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [==========] Running 77 tests from 4 test suites. 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] Global test environment set-up. 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: seed 60118 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.Dirty 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierPP.Dirty (639 ms) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.FlushWriteRaces 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierPP.FlushWriteRaces (10697 ms) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.HitSetNone 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierPP.HitSetNone (102 ms) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP (11439 ms total) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Overlay 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Overlay (7587 ms) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Promote 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Promote (8115 ms) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnap 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnap (10745 ms) 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] 
LibRadosTwoPoolsPP.PromoteSnapScrub 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [3] 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [4,3] 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [5,4,3] 2026-03-09T17:32:25.946 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [6,5,4,3] 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting some heads 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 6 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 5 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 4 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 3 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: waiting for scrubs... 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: done waiting 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapScrub (50067 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapTrimRace (9760 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Whiteout 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Whiteout (8217 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate (8052 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Evict 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Evict (8062 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap (10096 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap2 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap2 (9098 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ListSnap 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ListSnap (9420 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace (13477 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlush 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlush (8014 ms) 2026-03-09T17:32:25.947 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Flush 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Flush (7482 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushSnap 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushSnap (13336 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushTryFlushRaces (7772 ms) 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace 2026-03-09T17:32:25.947 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlushReadRace (8202 ms) 2026-03-09T17:32:26.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: cluster 2026-03-09T17:32:24.733381+0000 mgr.y (mgr.14505) 265 : cluster [DBG] pgmap v378: 316 pgs: 1 active+clean+snaptrim, 315 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: cluster 2026-03-09T17:32:24.733381+0000 mgr.y (mgr.14505) 265 : cluster [DBG] pgmap v378: 316 pgs: 1 active+clean+snaptrim, 315 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: audit 2026-03-09T17:32:25.019911+0000 mon.c (mon.2) 525 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: audit 2026-03-09T17:32:25.019911+0000 mon.c (mon.2) 525 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: audit 2026-03-09T17:32:25.934333+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: audit 2026-03-09T17:32:25.934333+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: cluster 2026-03-09T17:32:25.940121+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:26 vm00 bash[20770]: cluster 2026-03-09T17:32:25.940121+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: cluster 2026-03-09T17:32:24.733381+0000 mgr.y (mgr.14505) 265 : cluster [DBG] pgmap v378: 316 pgs: 1 active+clean+snaptrim, 315 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: cluster 2026-03-09T17:32:24.733381+0000 mgr.y (mgr.14505) 265 : cluster [DBG] pgmap v378: 316 pgs: 1 active+clean+snaptrim, 315 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: audit 2026-03-09T17:32:25.019911+0000 mon.c (mon.2) 525 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: audit 2026-03-09T17:32:25.019911+0000 mon.c (mon.2) 525 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: audit 2026-03-09T17:32:25.934333+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: audit 2026-03-09T17:32:25.934333+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: cluster 2026-03-09T17:32:25.940121+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T17:32:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:26 vm00 bash[28333]: cluster 2026-03-09T17:32:25.940121+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: cluster 2026-03-09T17:32:24.733381+0000 mgr.y (mgr.14505) 265 : cluster [DBG] pgmap v378: 316 pgs: 1 active+clean+snaptrim, 315 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: cluster 2026-03-09T17:32:24.733381+0000 mgr.y (mgr.14505) 265 : cluster [DBG] pgmap v378: 316 pgs: 1 active+clean+snaptrim, 315 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: audit 2026-03-09T17:32:25.019911+0000 mon.c (mon.2) 525 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: audit 2026-03-09T17:32:25.019911+0000 mon.c (mon.2) 525 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: audit 2026-03-09T17:32:25.934333+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: audit 2026-03-09T17:32:25.934333+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59916-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: cluster 2026-03-09T17:32:25.940121+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T17:32:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:26 vm02 bash[23351]: cluster 2026-03-09T17:32:25.940121+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T17:32:26.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:32:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:32:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: audit 2026-03-09T17:32:26.020679+0000 mon.c (mon.2) 526 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: audit 2026-03-09T17:32:26.020679+0000 mon.c (mon.2) 526 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: cluster 2026-03-09T17:32:26.677439+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: cluster 2026-03-09T17:32:26.677439+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: audit 2026-03-09T17:32:26.989662+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: audit 2026-03-09T17:32:26.989662+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: cluster 2026-03-09T17:32:26.993346+0000 mon.a (mon.0) 2143 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: cluster 2026-03-09T17:32:26.993346+0000 mon.a (mon.0) 2143 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: audit 2026-03-09T17:32:27.003820+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:27 vm00 bash[28333]: audit 2026-03-09T17:32:27.003820+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: audit 2026-03-09T17:32:26.020679+0000 mon.c (mon.2) 526 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: audit 2026-03-09T17:32:26.020679+0000 mon.c (mon.2) 526 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: cluster 2026-03-09T17:32:26.677439+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: cluster 2026-03-09T17:32:26.677439+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: audit 2026-03-09T17:32:26.989662+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: audit 2026-03-09T17:32:26.989662+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: cluster 2026-03-09T17:32:26.993346+0000 mon.a (mon.0) 2143 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: cluster 2026-03-09T17:32:26.993346+0000 mon.a (mon.0) 2143 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: audit 2026-03-09T17:32:27.003820+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:27 vm00 bash[20770]: audit 2026-03-09T17:32:27.003820+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: audit 2026-03-09T17:32:26.020679+0000 mon.c (mon.2) 526 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: audit 2026-03-09T17:32:26.020679+0000 mon.c (mon.2) 526 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: cluster 2026-03-09T17:32:26.677439+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: cluster 2026-03-09T17:32:26.677439+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: audit 2026-03-09T17:32:26.989662+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: audit 2026-03-09T17:32:26.989662+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: cluster 2026-03-09T17:32:26.993346+0000 mon.a (mon.0) 2143 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: cluster 2026-03-09T17:32:26.993346+0000 mon.a (mon.0) 2143 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: audit 2026-03-09T17:32:27.003820+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:27 vm02 bash[23351]: audit 2026-03-09T17:32:27.003820+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: cluster 2026-03-09T17:32:26.733781+0000 mgr.y (mgr.14505) 266 : cluster [DBG] pgmap v381: 292 pgs: 8 unknown, 1 active+clean+snaptrim, 283 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: cluster 2026-03-09T17:32:26.733781+0000 mgr.y (mgr.14505) 266 : cluster [DBG] pgmap v381: 292 pgs: 8 unknown, 1 active+clean+snaptrim, 283 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.026117+0000 mon.c (mon.2) 527 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.026117+0000 mon.c (mon.2) 527 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.661577+0000 mon.c (mon.2) 528 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.661577+0000 mon.c (mon.2) 528 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.943965+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.943965+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: cluster 2026-03-09T17:32:27.947053+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: cluster 2026-03-09T17:32:27.947053+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.972358+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.972358+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.972546+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.972546+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 
192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.975259+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.975259+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.975352+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:28 vm00 bash[20770]: audit 2026-03-09T17:32:27.975352+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: cluster 2026-03-09T17:32:26.733781+0000 mgr.y (mgr.14505) 266 : cluster [DBG] pgmap v381: 292 pgs: 8 unknown, 1 active+clean+snaptrim, 283 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: cluster 2026-03-09T17:32:26.733781+0000 mgr.y (mgr.14505) 266 : cluster [DBG] pgmap v381: 292 pgs: 8 unknown, 1 active+clean+snaptrim, 283 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.026117+0000 mon.c (mon.2) 527 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.026117+0000 mon.c (mon.2) 527 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.661577+0000 mon.c (mon.2) 528 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.661577+0000 mon.c (mon.2) 528 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.943965+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.943965+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: cluster 2026-03-09T17:32:27.947053+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: cluster 2026-03-09T17:32:27.947053+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.972358+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.972358+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.972546+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.972546+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.975259+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.975259+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.975352+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:28 vm00 bash[28333]: audit 2026-03-09T17:32:27.975352+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: cluster 2026-03-09T17:32:26.733781+0000 mgr.y (mgr.14505) 266 : cluster [DBG] pgmap v381: 292 pgs: 8 unknown, 1 active+clean+snaptrim, 283 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: cluster 2026-03-09T17:32:26.733781+0000 mgr.y (mgr.14505) 266 : cluster [DBG] pgmap v381: 292 pgs: 8 unknown, 1 active+clean+snaptrim, 283 active+clean; 4.4 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.026117+0000 mon.c (mon.2) 527 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.026117+0000 mon.c (mon.2) 527 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.661577+0000 mon.c (mon.2) 528 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.661577+0000 mon.c (mon.2) 528 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.943965+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.943965+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: cluster 2026-03-09T17:32:27.947053+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: cluster 2026-03-09T17:32:27.947053+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.972358+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.972358+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.972546+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.972546+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.975259+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.975259+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.975352+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:28 vm02 bash[23351]: audit 2026-03-09T17:32:27.975352+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.029693+0000 mon.c (mon.2) 529 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.029693+0000 mon.c (mon.2) 529 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.947540+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.947540+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.947867+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.947867+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.954894+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.954894+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.955135+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 
192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.955135+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: cluster 2026-03-09T17:32:28.959075+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: cluster 2026-03-09T17:32:28.959075+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.959617+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.959617+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.959870+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:29 vm00 bash[28333]: audit 2026-03-09T17:32:28.959870+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.029693+0000 mon.c (mon.2) 529 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.029693+0000 mon.c (mon.2) 529 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.947540+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.947540+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.947867+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.947867+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.954894+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.954894+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.955135+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.955135+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: cluster 2026-03-09T17:32:28.959075+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: cluster 2026-03-09T17:32:28.959075+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.959617+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.959617+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.959870+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:29 vm00 bash[20770]: audit 2026-03-09T17:32:28.959870+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.029693+0000 mon.c (mon.2) 529 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.029693+0000 mon.c (mon.2) 529 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.947540+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.947540+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.947867+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.947867+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.954894+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.954894+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.955135+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.955135+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 
192.168.123.100:0/2867088040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: cluster 2026-03-09T17:32:28.959075+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: cluster 2026-03-09T17:32:28.959075+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.959617+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.959617+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.959870+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:29 vm02 bash[23351]: audit 2026-03-09T17:32:28.959870+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]: dispatch 2026-03-09T17:32:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:30 vm02 bash[23351]: cluster 2026-03-09T17:32:28.734174+0000 mgr.y (mgr.14505) 267 : cluster [DBG] pgmap v384: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:30 vm02 bash[23351]: cluster 2026-03-09T17:32:28.734174+0000 mgr.y (mgr.14505) 267 : cluster [DBG] pgmap v384: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:30 vm02 bash[23351]: audit 2026-03-09T17:32:29.031324+0000 mon.c (mon.2) 530 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:30 vm02 bash[23351]: audit 2026-03-09T17:32:29.031324+0000 mon.c (mon.2) 530 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:30 vm00 bash[28333]: cluster 2026-03-09T17:32:28.734174+0000 mgr.y (mgr.14505) 267 : cluster [DBG] pgmap v384: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:30 vm00 bash[28333]: cluster 2026-03-09T17:32:28.734174+0000 mgr.y (mgr.14505) 267 : cluster [DBG] pgmap v384: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:30 vm00 bash[28333]: audit 2026-03-09T17:32:29.031324+0000 mon.c (mon.2) 530 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:30 vm00 bash[28333]: audit 2026-03-09T17:32:29.031324+0000 mon.c (mon.2) 530 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:30 vm00 bash[20770]: cluster 2026-03-09T17:32:28.734174+0000 mgr.y (mgr.14505) 267 : cluster [DBG] pgmap v384: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:30 vm00 bash[20770]: cluster 2026-03-09T17:32:28.734174+0000 mgr.y (mgr.14505) 267 : cluster [DBG] pgmap v384: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:30 vm00 bash[20770]: audit 2026-03-09T17:32:29.031324+0000 mon.c (mon.2) 530 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:30 vm00 bash[20770]: audit 2026-03-09T17:32:29.031324+0000 mon.c (mon.2) 530 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.032041+0000 mon.c (mon.2) 531 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.032041+0000 mon.c (mon.2) 531 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.108069+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.108069+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.108112+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.108112+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: cluster 2026-03-09T17:32:30.210361+0000 mon.a (mon.0) 2156 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: cluster 2026-03-09T17:32:30.210361+0000 mon.a (mon.0) 2156 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.289181+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.289181+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.299373+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.299373+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.350027+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.350027+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.350127+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.350127+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.355649+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.355649+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.364027+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.364027+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.364491+0000 mon.b (mon.1) 314 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.364491+0000 mon.b (mon.1) 314 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.365806+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:30.365806+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.032941+0000 mon.c (mon.2) 532 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.032941+0000 mon.c (mon.2) 532 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.112589+0000 mon.a (mon.0) 2161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.112589+0000 mon.a (mon.0) 2161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.112717+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.112717+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: cluster 2026-03-09T17:32:31.120341+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: cluster 2026-03-09T17:32:31.120341+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.120583+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.120583+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.121029+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 
192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.121029+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.123769+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.123769+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.123859+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:31 vm02 bash[23351]: audit 2026-03-09T17:32:31.123859+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.032041+0000 mon.c (mon.2) 531 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.032041+0000 mon.c (mon.2) 531 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.108069+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.108069+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.108112+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.108112+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: cluster 2026-03-09T17:32:30.210361+0000 mon.a (mon.0) 2156 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: cluster 2026-03-09T17:32:30.210361+0000 mon.a (mon.0) 2156 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.289181+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.289181+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.299373+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.299373+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.350027+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.350027+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.350127+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.350127+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.355649+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.355649+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.364027+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.364027+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.364491+0000 mon.b (mon.1) 314 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.364491+0000 mon.b (mon.1) 314 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.365806+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:30.365806+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.032941+0000 mon.c (mon.2) 532 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.032941+0000 mon.c (mon.2) 532 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.112589+0000 mon.a (mon.0) 2161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.112589+0000 mon.a (mon.0) 2161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.112717+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.112717+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: cluster 2026-03-09T17:32:31.120341+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: cluster 2026-03-09T17:32:31.120341+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.120583+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.120583+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.121029+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.121029+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 
192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.123769+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.123769+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.123859+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:31 vm00 bash[20770]: audit 2026-03-09T17:32:31.123859+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.032041+0000 mon.c (mon.2) 531 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.032041+0000 mon.c (mon.2) 531 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.108069+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.108069+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.108112+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.108112+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59916-55"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: cluster 2026-03-09T17:32:30.210361+0000 mon.a (mon.0) 2156 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: cluster 2026-03-09T17:32:30.210361+0000 mon.a (mon.0) 2156 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.289181+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.289181+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.299373+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.299373+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.350027+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.350027+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.350127+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.350127+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.355649+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 
192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.355649+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.364027+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.364027+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.364491+0000 mon.b (mon.1) 314 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.364491+0000 mon.b (mon.1) 314 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.365806+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:30.365806+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.032941+0000 mon.c (mon.2) 532 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.032941+0000 mon.c (mon.2) 532 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.112589+0000 mon.a (mon.0) 2161 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.112589+0000 mon.a (mon.0) 2161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.112717+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.112717+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59916-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: cluster 2026-03-09T17:32:31.120341+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: cluster 2026-03-09T17:32:31.120341+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.120583+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.120583+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.121029+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.121029+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.123769+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.123769+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.123859+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:31 vm00 bash[28333]: audit 2026-03-09T17:32:31.123859+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:32.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:32:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: cluster 2026-03-09T17:32:30.734580+0000 mgr.y (mgr.14505) 268 : cluster [DBG] pgmap v387: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: cluster 2026-03-09T17:32:30.734580+0000 mgr.y (mgr.14505) 268 : cluster [DBG] pgmap v387: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: audit 2026-03-09T17:32:31.709255+0000 mgr.y (mgr.14505) 269 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: audit 2026-03-09T17:32:31.709255+0000 mgr.y (mgr.14505) 269 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: audit 2026-03-09T17:32:32.033741+0000 mon.c (mon.2) 533 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: audit 2026-03-09T17:32:32.033741+0000 mon.c (mon.2) 533 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: audit 2026-03-09T17:32:32.233933+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: audit 2026-03-09T17:32:32.233933+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: cluster 2026-03-09T17:32:32.243496+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T17:32:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:32 vm02 bash[23351]: cluster 2026-03-09T17:32:32.243496+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T17:32:32.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: cluster 2026-03-09T17:32:30.734580+0000 mgr.y (mgr.14505) 268 : cluster [DBG] pgmap v387: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: cluster 2026-03-09T17:32:30.734580+0000 mgr.y (mgr.14505) 268 : cluster [DBG] pgmap v387: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: audit 2026-03-09T17:32:31.709255+0000 mgr.y (mgr.14505) 269 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: audit 2026-03-09T17:32:31.709255+0000 mgr.y (mgr.14505) 269 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: audit 2026-03-09T17:32:32.033741+0000 mon.c (mon.2) 533 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: audit 2026-03-09T17:32:32.033741+0000 mon.c (mon.2) 533 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: audit 2026-03-09T17:32:32.233933+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: audit 2026-03-09T17:32:32.233933+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: cluster 2026-03-09T17:32:32.243496+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:32 vm00 bash[28333]: cluster 2026-03-09T17:32:32.243496+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: cluster 2026-03-09T17:32:30.734580+0000 mgr.y (mgr.14505) 268 : cluster [DBG] pgmap v387: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: cluster 2026-03-09T17:32:30.734580+0000 mgr.y (mgr.14505) 268 : cluster [DBG] pgmap v387: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: audit 2026-03-09T17:32:31.709255+0000 mgr.y (mgr.14505) 269 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: audit 2026-03-09T17:32:31.709255+0000 mgr.y (mgr.14505) 269 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: audit 2026-03-09T17:32:32.033741+0000 mon.c (mon.2) 533 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: audit 2026-03-09T17:32:32.033741+0000 mon.c (mon.2) 533 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: audit 2026-03-09T17:32:32.233933+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: audit 2026-03-09T17:32:32.233933+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: cluster 2026-03-09T17:32:32.243496+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T17:32:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:32 vm00 bash[20770]: cluster 2026-03-09T17:32:32.243496+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: cluster 2026-03-09T17:32:32.734930+0000 mgr.y (mgr.14505) 270 : cluster [DBG] pgmap v390: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: cluster 2026-03-09T17:32:32.734930+0000 mgr.y (mgr.14505) 270 : cluster [DBG] pgmap v390: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.034704+0000 mon.c (mon.2) 534 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.034704+0000 mon.c (mon.2) 534 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.237385+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.237385+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: cluster 2026-03-09T17:32:33.245195+0000 mon.a (mon.0) 2169 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: cluster 2026-03-09T17:32:33.245195+0000 mon.a (mon.0) 2169 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.265099+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.265099+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.266085+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.266085+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.266440+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.266440+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.267318+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:33 vm02 bash[23351]: audit 2026-03-09T17:32:33.267318+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: cluster 2026-03-09T17:32:32.734930+0000 mgr.y (mgr.14505) 270 : cluster [DBG] pgmap v390: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: cluster 2026-03-09T17:32:32.734930+0000 mgr.y (mgr.14505) 270 : cluster [DBG] pgmap v390: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.034704+0000 mon.c (mon.2) 534 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.034704+0000 mon.c (mon.2) 534 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.237385+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.237385+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: cluster 2026-03-09T17:32:33.245195+0000 mon.a (mon.0) 2169 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: cluster 2026-03-09T17:32:33.245195+0000 mon.a (mon.0) 2169 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.265099+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.265099+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.266085+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.266085+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.266440+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.266440+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.267318+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:33 vm00 bash[28333]: audit 2026-03-09T17:32:33.267318+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: cluster 2026-03-09T17:32:32.734930+0000 mgr.y (mgr.14505) 270 : cluster [DBG] pgmap v390: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: cluster 2026-03-09T17:32:32.734930+0000 mgr.y (mgr.14505) 270 : cluster [DBG] pgmap v390: 315 pgs: 32 creating+peering, 1 active+clean+snaptrim, 1 peering, 281 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.034704+0000 mon.c (mon.2) 534 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.034704+0000 mon.c (mon.2) 534 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.237385+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.237385+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59916-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: cluster 2026-03-09T17:32:33.245195+0000 mon.a (mon.0) 2169 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: cluster 2026-03-09T17:32:33.245195+0000 mon.a (mon.0) 2169 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.265099+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.265099+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.266085+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.266085+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.266440+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.266440+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.267318+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:33 vm00 bash[20770]: audit 2026-03-09T17:32:33.267318+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]: dispatch 2026-03-09T17:32:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:34 vm02 bash[23351]: audit 2026-03-09T17:32:34.035642+0000 mon.c (mon.2) 535 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:34 vm02 bash[23351]: audit 2026-03-09T17:32:34.035642+0000 mon.c (mon.2) 535 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:34 vm02 bash[23351]: audit 2026-03-09T17:32:34.241503+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]': finished 2026-03-09T17:32:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:34 vm02 bash[23351]: audit 2026-03-09T17:32:34.241503+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]': finished 2026-03-09T17:32:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:34 vm02 bash[23351]: cluster 2026-03-09T17:32:34.245448+0000 mon.a (mon.0) 2173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T17:32:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:34 vm02 bash[23351]: cluster 2026-03-09T17:32:34.245448+0000 mon.a (mon.0) 2173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T17:32:34.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:34 vm00 bash[28333]: audit 2026-03-09T17:32:34.035642+0000 mon.c (mon.2) 535 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:34 vm00 bash[28333]: audit 2026-03-09T17:32:34.035642+0000 mon.c (mon.2) 535 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:34 vm00 bash[28333]: audit 2026-03-09T17:32:34.241503+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]': finished 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:34 vm00 bash[28333]: audit 2026-03-09T17:32:34.241503+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]': finished 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:34 vm00 bash[28333]: cluster 2026-03-09T17:32:34.245448+0000 mon.a (mon.0) 2173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:34 vm00 bash[28333]: cluster 2026-03-09T17:32:34.245448+0000 mon.a (mon.0) 2173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:34 vm00 bash[20770]: audit 2026-03-09T17:32:34.035642+0000 mon.c (mon.2) 535 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:34 vm00 bash[20770]: audit 2026-03-09T17:32:34.035642+0000 mon.c (mon.2) 535 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:34 vm00 bash[20770]: audit 2026-03-09T17:32:34.241503+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]': finished 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:34 vm00 bash[20770]: audit 2026-03-09T17:32:34.241503+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-41"}]': finished 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:34 vm00 bash[20770]: cluster 2026-03-09T17:32:34.245448+0000 mon.a (mon.0) 2173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T17:32:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:34 vm00 bash[20770]: cluster 2026-03-09T17:32:34.245448+0000 mon.a (mon.0) 2173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T17:32:35.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: cluster 2026-03-09T17:32:34.735334+0000 mgr.y (mgr.14505) 271 : cluster [DBG] pgmap v393: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: cluster 2026-03-09T17:32:34.735334+0000 mgr.y (mgr.14505) 271 : cluster [DBG] pgmap v393: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: audit 2026-03-09T17:32:35.036590+0000 mon.c (mon.2) 536 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: audit 2026-03-09T17:32:35.036590+0000 mon.c (mon.2) 536 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: cluster 2026-03-09T17:32:35.261233+0000 mon.a (mon.0) 2174 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: cluster 2026-03-09T17:32:35.261233+0000 mon.a (mon.0) 2174 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: audit 2026-03-09T17:32:35.261412+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: audit 2026-03-09T17:32:35.261412+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: audit 2026-03-09T17:32:35.262706+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:35 vm00 bash[28333]: audit 2026-03-09T17:32:35.262706+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: cluster 2026-03-09T17:32:34.735334+0000 mgr.y (mgr.14505) 271 : cluster [DBG] pgmap v393: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: cluster 2026-03-09T17:32:34.735334+0000 mgr.y (mgr.14505) 271 : cluster [DBG] pgmap v393: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: audit 2026-03-09T17:32:35.036590+0000 mon.c (mon.2) 536 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: audit 2026-03-09T17:32:35.036590+0000 mon.c (mon.2) 536 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: cluster 2026-03-09T17:32:35.261233+0000 mon.a (mon.0) 2174 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: cluster 2026-03-09T17:32:35.261233+0000 mon.a (mon.0) 2174 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: audit 2026-03-09T17:32:35.261412+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: audit 2026-03-09T17:32:35.261412+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: audit 2026-03-09T17:32:35.262706+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:35 vm00 bash[20770]: audit 2026-03-09T17:32:35.262706+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: cluster 2026-03-09T17:32:34.735334+0000 mgr.y (mgr.14505) 271 : cluster [DBG] pgmap v393: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: cluster 2026-03-09T17:32:34.735334+0000 mgr.y (mgr.14505) 271 : cluster [DBG] pgmap v393: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: audit 2026-03-09T17:32:35.036590+0000 mon.c (mon.2) 536 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: audit 2026-03-09T17:32:35.036590+0000 mon.c (mon.2) 536 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: cluster 2026-03-09T17:32:35.261233+0000 mon.a (mon.0) 2174 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: cluster 2026-03-09T17:32:35.261233+0000 mon.a (mon.0) 2174 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: audit 2026-03-09T17:32:35.261412+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: audit 2026-03-09T17:32:35.261412+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: audit 2026-03-09T17:32:35.262706+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:35 vm02 bash[23351]: audit 2026-03-09T17:32:35.262706+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: cluster 2026-03-09T17:32:35.380448+0000 mon.a (mon.0) 2176 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: cluster 2026-03-09T17:32:35.380448+0000 mon.a (mon.0) 2176 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.037672+0000 mon.c (mon.2) 537 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.037672+0000 mon.c (mon.2) 537 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.249010+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.249010+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.251793+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.251793+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: cluster 2026-03-09T17:32:36.257534+0000 mon.a (mon.0) 2178 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: cluster 2026-03-09T17:32:36.257534+0000 mon.a (mon.0) 2178 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.258245+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.258245+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.258430+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.258430+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.259898+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:36 vm00 bash[28333]: audit 2026-03-09T17:32:36.259898+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: cluster 2026-03-09T17:32:35.380448+0000 mon.a (mon.0) 2176 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: cluster 2026-03-09T17:32:35.380448+0000 mon.a (mon.0) 2176 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.037672+0000 mon.c (mon.2) 537 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.037672+0000 mon.c (mon.2) 537 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.249010+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.249010+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.251793+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 
192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.251793+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: cluster 2026-03-09T17:32:36.257534+0000 mon.a (mon.0) 2178 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: cluster 2026-03-09T17:32:36.257534+0000 mon.a (mon.0) 2178 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.258245+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.258245+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.258430+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.258430+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.259898+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:36 vm00 bash[20770]: audit 2026-03-09T17:32:36.259898+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:32:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:32:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: cluster 2026-03-09T17:32:35.380448+0000 mon.a (mon.0) 2176 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: cluster 2026-03-09T17:32:35.380448+0000 mon.a (mon.0) 2176 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.037672+0000 mon.c (mon.2) 537 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.037672+0000 mon.c (mon.2) 537 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.249010+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.249010+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.251793+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.251793+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/182560938' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: cluster 2026-03-09T17:32:36.257534+0000 mon.a (mon.0) 2178 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: cluster 2026-03-09T17:32:36.257534+0000 mon.a (mon.0) 2178 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.258245+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.258245+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.258430+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.258430+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.259898+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:36 vm02 bash[23351]: audit 2026-03-09T17:32:36.259898+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.682341+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.682341+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.682445+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.682445+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: cluster 2026-03-09T17:32:36.687668+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: cluster 2026-03-09T17:32:36.687668+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.699910+0000 mon.b (mon.1) 322 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.699910+0000 mon.b (mon.1) 322 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.701492+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.701492+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.705777+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.705777+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.713621+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.713621+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.718554+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.718554+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.721759+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:36.721759+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: cluster 2026-03-09T17:32:36.735663+0000 mgr.y (mgr.14505) 272 : cluster [DBG] pgmap v397: 315 pgs: 32 unknown, 283 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: cluster 2026-03-09T17:32:36.735663+0000 mgr.y (mgr.14505) 272 : cluster [DBG] pgmap v397: 315 pgs: 32 unknown, 283 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:37.038526+0000 mon.c (mon.2) 538 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:37 vm00 bash[20770]: audit 2026-03-09T17:32:37.038526+0000 mon.c (mon.2) 538 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.682341+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.682341+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.682445+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.682445+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: cluster 2026-03-09T17:32:36.687668+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: cluster 2026-03-09T17:32:36.687668+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.699910+0000 mon.b (mon.1) 322 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.699910+0000 mon.b (mon.1) 322 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.701492+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.701492+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.705777+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.705777+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.713621+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.713621+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.718554+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.718554+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.721759+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:36.721759+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: cluster 2026-03-09T17:32:36.735663+0000 mgr.y (mgr.14505) 272 : cluster [DBG] pgmap v397: 315 pgs: 32 unknown, 283 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: cluster 2026-03-09T17:32:36.735663+0000 mgr.y (mgr.14505) 272 : cluster [DBG] pgmap v397: 315 pgs: 32 unknown, 283 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:37.038526+0000 mon.c (mon.2) 538 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:38.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:37 vm00 bash[28333]: audit 2026-03-09T17:32:37.038526+0000 mon.c (mon.2) 538 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.682341+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.682341+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59916-56"}]': finished 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.682445+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.682445+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: cluster 2026-03-09T17:32:36.687668+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: cluster 2026-03-09T17:32:36.687668+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.699910+0000 mon.b (mon.1) 322 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.699910+0000 mon.b (mon.1) 322 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.701492+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.701492+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.705777+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.705777+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.713621+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.713621+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.718554+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.718554+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.721759+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:36.721759+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: cluster 2026-03-09T17:32:36.735663+0000 mgr.y (mgr.14505) 272 : cluster [DBG] pgmap v397: 315 pgs: 32 unknown, 283 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:38.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: cluster 2026-03-09T17:32:36.735663+0000 mgr.y (mgr.14505) 272 : cluster [DBG] pgmap v397: 315 pgs: 32 unknown, 283 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:38.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:37.038526+0000 mon.c (mon.2) 538 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:38.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:37 vm02 bash[23351]: audit 2026-03-09T17:32:37.038526+0000 mon.c (mon.2) 538 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:39.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.691296+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.691296+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.691344+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.691344+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.697429+0000 mon.b (mon.1) 324 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.697429+0000 mon.b (mon.1) 324 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: cluster 2026-03-09T17:32:37.705487+0000 mon.a (mon.0) 2190 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: cluster 2026-03-09T17:32:37.705487+0000 mon.a (mon.0) 2190 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.707423+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.707423+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.707596+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:37.707596+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:38.039420+0000 mon.c (mon.2) 539 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:38 vm00 bash[28333]: audit 2026-03-09T17:32:38.039420+0000 mon.c (mon.2) 539 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.691296+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.691296+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.691344+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.691344+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.697429+0000 mon.b (mon.1) 324 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.697429+0000 mon.b (mon.1) 324 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: cluster 2026-03-09T17:32:37.705487+0000 mon.a (mon.0) 2190 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: cluster 2026-03-09T17:32:37.705487+0000 mon.a (mon.0) 2190 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.707423+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.707423+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.707596+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:37.707596+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:38.039420+0000 mon.c (mon.2) 539 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:38 vm00 bash[20770]: audit 2026-03-09T17:32:38.039420+0000 mon.c (mon.2) 539 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.691296+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.691296+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.691344+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.691344+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59916-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.697429+0000 mon.b (mon.1) 324 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.697429+0000 mon.b (mon.1) 324 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: cluster 2026-03-09T17:32:37.705487+0000 mon.a (mon.0) 2190 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: cluster 2026-03-09T17:32:37.705487+0000 mon.a (mon.0) 2190 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.707423+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.707423+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.707596+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:37.707596+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:38.039420+0000 mon.c (mon.2) 539 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:38 vm02 bash[23351]: audit 2026-03-09T17:32:38.039420+0000 mon.c (mon.2) 539 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.694657+0000 mon.a (mon.0) 2193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.694657+0000 mon.a (mon.0) 2193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.698036+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.698036+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: cluster 2026-03-09T17:32:38.706459+0000 mon.a (mon.0) 2194 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: cluster 2026-03-09T17:32:38.706459+0000 mon.a (mon.0) 2194 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.707525+0000 mon.a (mon.0) 2195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.707525+0000 mon.a (mon.0) 2195 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: cluster 2026-03-09T17:32:38.736033+0000 mgr.y (mgr.14505) 273 : cluster [DBG] pgmap v400: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: cluster 2026-03-09T17:32:38.736033+0000 mgr.y (mgr.14505) 273 : cluster [DBG] pgmap v400: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.736736+0000 mon.c (mon.2) 540 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.736736+0000 mon.c (mon.2) 540 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.737368+0000 mon.a (mon.0) 2196 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:38.737368+0000 mon.a (mon.0) 2196 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:39.040137+0000 mon.c (mon.2) 541 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:39.040137+0000 mon.c (mon.2) 541 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:39.698035+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:39 vm00 bash[28333]: audit 2026-03-09T17:32:39.698035+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.694657+0000 mon.a (mon.0) 2193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.694657+0000 mon.a (mon.0) 2193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.698036+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.698036+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: cluster 2026-03-09T17:32:38.706459+0000 mon.a (mon.0) 2194 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: cluster 2026-03-09T17:32:38.706459+0000 mon.a (mon.0) 2194 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.707525+0000 mon.a (mon.0) 2195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.707525+0000 mon.a (mon.0) 2195 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: cluster 2026-03-09T17:32:38.736033+0000 mgr.y (mgr.14505) 273 : cluster [DBG] pgmap v400: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: cluster 2026-03-09T17:32:38.736033+0000 mgr.y (mgr.14505) 273 : cluster [DBG] pgmap v400: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.736736+0000 mon.c (mon.2) 540 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.736736+0000 mon.c (mon.2) 540 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.737368+0000 mon.a (mon.0) 2196 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:38.737368+0000 mon.a (mon.0) 2196 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:39.040137+0000 mon.c (mon.2) 541 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:39.040137+0000 mon.c (mon.2) 541 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:39.698035+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:39 vm00 bash[20770]: audit 2026-03-09T17:32:39.698035+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.694657+0000 mon.a (mon.0) 2193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.694657+0000 mon.a (mon.0) 2193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.698036+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.698036+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: cluster 2026-03-09T17:32:38.706459+0000 mon.a (mon.0) 2194 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: cluster 2026-03-09T17:32:38.706459+0000 mon.a (mon.0) 2194 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.707525+0000 mon.a (mon.0) 2195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.707525+0000 mon.a (mon.0) 2195 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: cluster 2026-03-09T17:32:38.736033+0000 mgr.y (mgr.14505) 273 : cluster [DBG] pgmap v400: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: cluster 2026-03-09T17:32:38.736033+0000 mgr.y (mgr.14505) 273 : cluster [DBG] pgmap v400: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.736736+0000 mon.c (mon.2) 540 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.736736+0000 mon.c (mon.2) 540 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.737368+0000 mon.a (mon.0) 2196 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:38.737368+0000 mon.a (mon.0) 2196 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:39.040137+0000 mon.c (mon.2) 541 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:39.040137+0000 mon.c (mon.2) 541 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:39.698035+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:39 vm02 bash[23351]: audit 2026-03-09T17:32:39.698035+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59916-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.698077+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.698077+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.698105+0000 mon.a (mon.0) 2199 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.698105+0000 mon.a (mon.0) 2199 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.705529+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.705529+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: cluster 2026-03-09T17:32:39.713298+0000 mon.a (mon.0) 2200 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: cluster 2026-03-09T17:32:39.713298+0000 mon.a (mon.0) 2200 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.718735+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:39.718735+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.040948+0000 mon.c (mon.2) 542 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.040948+0000 mon.c (mon.2) 542 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.041436+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.041436+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.041663+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.041663+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.701794+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.701794+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.701865+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: audit 2026-03-09T17:32:40.701865+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: cluster 2026-03-09T17:32:40.718273+0000 mon.a (mon.0) 2205 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T17:32:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:40 vm00 bash[28333]: cluster 2026-03-09T17:32:40.718273+0000 mon.a (mon.0) 2205 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.698077+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.698077+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.698105+0000 mon.a (mon.0) 2199 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.698105+0000 mon.a (mon.0) 2199 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.705529+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.705529+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: cluster 2026-03-09T17:32:39.713298+0000 mon.a (mon.0) 2200 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: cluster 2026-03-09T17:32:39.713298+0000 mon.a (mon.0) 2200 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.718735+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:39.718735+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.040948+0000 mon.c (mon.2) 542 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.040948+0000 mon.c (mon.2) 542 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.041436+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.041436+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.041663+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.041663+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.701794+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.701794+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.701865+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: audit 2026-03-09T17:32:40.701865+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: cluster 2026-03-09T17:32:40.718273+0000 mon.a (mon.0) 2205 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T17:32:41.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:40 vm00 bash[20770]: cluster 2026-03-09T17:32:40.718273+0000 mon.a (mon.0) 2205 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetRead 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: ok, hit_set contains 256:602f83fe:::foo:head 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetRead (9320 ms) 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetWrite 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg_num = 32 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 0 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 1 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 2 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 3 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 4 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 5 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 6 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 7 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 8 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 9 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 10 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 11 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 12 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 13 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 14 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 15 ls 1773077561,0 2026-03-09T17:32:41.060 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 16 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 17 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 18 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 19 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 20 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 21 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 22 ls 1773077561,0 2026-03-09T17:32:41.061 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 23 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 24 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 25 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 26 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 27 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 28 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 29 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 30 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 31 ls 1773077561,0 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg_num = 32 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6cac518f:::0:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:02547ec2:::1:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f905c69b:::2:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:cfc208b3:::3:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d83876eb:::4:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b29083e3:::5:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c4fdafeb:::6:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:5c6b0b28:::7:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:bd63b0f1:::8:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:e960b815:::9:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:52ea6a34:::10:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:89d3ae78:::11:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:de5d7c5f:::12:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:566253c9:::13:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:62a1935d:::14:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:863748b0:::15:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3958e169:::16:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4d4dabf9:::17:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:8391935d:::18:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:28883081:::19:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:69259c59:::20:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4bdb80b7:::21:head 2026-03-09T17:32:41.061 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a11c5d71:::22:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:271af37b:::23:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:95b121be:::24:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:58d1031b:::25:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:0a050783:::26:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c709704c:::27:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:cbe56eaf:::28:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:86b4b162:::29:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:70d89383:::30:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:dd450c7c:::31:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6d5729b1:::32:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c388f3fb:::33:head 2026-03-09T17:32:41.061 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:56cfea31:::34:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9dbc1bf7:::35:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:40b74ccd:::36:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4d5aaf42:::37:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:920f362c:::38:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6cc53222:::39:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9cad833f:::40:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:1ea84d41:::41:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c4480ef6:::42:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a694361e:::43:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d1bd33e9:::44:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ddc2cd5d:::45:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:2b782207:::46:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7b187fca:::47:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:90ecdf6f:::48:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a5ed95fe:::49:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ea0eaa55:::50:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f33ef17b:::51:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a0d1b2f6:::52:head 
2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:60c5229e:::53:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:edcbc575:::54:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:102cf253:::55:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:efb7fb0b:::56:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:50d0a326:::57:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d4dc5daf:::58:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3a130462:::59:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ec87ed71:::60:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d5bc9454:::61:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3ddfe313:::62:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7c2816b9:::63:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:47e00e4d:::64:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c6410c18:::65:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b48ed237:::66:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:cd63ad31:::67:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b179e92b:::68:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:0d9f741a:::69:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6d3352ae:::70:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c6d5c19e:::71:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:bc4729c3:::72:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:77e930b9:::73:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:0abeecfd:::74:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b7c37e15:::75:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b6378398:::76:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:02bd68de:::77:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:cc795d2d:::78:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:630d4fea:::79:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:e0d29ef5:::80:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:fd6f13d2:::81:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:606461d5:::82:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 
258:eadbdc43:::83:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:8761d0bb:::84:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9ef0186f:::85:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:e0d41294:::86:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:961de695:::87:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:1423148f:::88:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:633a8fa2:::89:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a8653809:::90:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3dac8b33:::91:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:35aad435:::92:head 2026-03-09T17:32:41.062 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f6dcc343:::93:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:dbbdad87:::94:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:1cb48ce0:::95:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:03cd461c:::96:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:17a4ea99:::97:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9993c9a7:::98:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6394211c:::99:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:94c7ae57:::100:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6fdee5bb:::101:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9a477fd1:::102:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:eb850916:::103:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:affc56b9:::104:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b42dc814:::105:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f319f8f0:::106:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9a40b9de:::107:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:8b524f28:::108:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:e3de589f:::109:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:90f90a5b:::110:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a7b4f1d7:::111:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:af51766e:::112:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b6f90bd1:::113:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: 
api_tier_pp: checking for 258:e0261208:::114:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c9569ef7:::115:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:61bebe50:::116:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:fe93412b:::117:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d3d38bee:::118:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3100ba0c:::119:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d0560ada:::120:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f0ea8b35:::121:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:766f231a:::122:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a07a2582:::123:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:bd7c6b3a:::124:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:fb2ddaff:::125:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4408e1fe:::126:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ee1df7a7:::127:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c3002909:::128:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4f48ffa9:::129:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:edf38733:::130:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c08425c0:::131:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:5f902d98:::132:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:41ea2c93:::133:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:813cee13:::134:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:0131818d:::135:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:26ba5a85:::136:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:381b8a5a:::137:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:28797e47:::138:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:bfca7f22:::139:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:36807075:::140:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:80b03975:::141:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:5c15709b:::142:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f39ea15e:::143:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ea992956:::144:head 2026-03-09T17:32:41.063 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:48887b1c:::145:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:9f24a9dd:::146:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:987f100b:::147:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d2dd3581:::148:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7fed1808:::149:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c80b70e9:::150:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:85ed90f9:::151:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:36428b24:::152:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d044c34a:::153:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7c18bf58:::154:head 2026-03-09T17:32:41.063 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d1c21232:::155:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a7a3c575:::156:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:87da0633:::157:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d5ac3822:::158:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3f20522d:::159:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6ca26563:::160:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:532ce135:::161:head 2026-03-09T17:32:41.064 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:c78863e6:::162:head 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.698077+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.698077+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.698105+0000 mon.a (mon.0) 2199 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.698105+0000 mon.a (mon.0) 2199 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "22"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.705529+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.705529+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: cluster 2026-03-09T17:32:39.713298+0000 mon.a (mon.0) 2200 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: cluster 2026-03-09T17:32:39.713298+0000 mon.a (mon.0) 2200 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.718735+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:39.718735+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.040948+0000 mon.c (mon.2) 542 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.040948+0000 mon.c (mon.2) 542 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.041436+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.041436+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.041663+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.041663+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.701794+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.701794+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.701865+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: audit 2026-03-09T17:32:40.701865+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pg_num","val":"11"}]': finished 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: cluster 2026-03-09T17:32:40.718273+0000 mon.a (mon.0) 2205 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T17:32:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:40 vm02 bash[23351]: cluster 2026-03-09T17:32:40.718273+0000 mon.a (mon.0) 2205 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.720510+0000 mon.c (mon.2) 544 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.720510+0000 mon.c (mon.2) 544 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.729883+0000 mon.c (mon.2) 545 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.729883+0000 mon.c (mon.2) 545 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.730329+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.730329+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: cluster 2026-03-09T17:32:40.736490+0000 mgr.y (mgr.14505) 274 : cluster [DBG] pgmap v403: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: cluster 2026-03-09T17:32:40.736490+0000 mgr.y (mgr.14505) 274 : cluster [DBG] pgmap v403: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.749801+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:40.749801+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.059366+0000 mon.b (mon.1) 327 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.059366+0000 mon.b (mon.1) 327 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.141515+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.141515+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.142429+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.142429+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.142866+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.142866+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.143633+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: audit 2026-03-09T17:32:41.143633+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: cluster 2026-03-09T17:32:41.681250+0000 mon.a (mon.0) 2209 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:41 vm02 bash[23351]: cluster 2026-03-09T17:32:41.681250+0000 mon.a (mon.0) 2209 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:42.137 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:32:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.720510+0000 mon.c (mon.2) 544 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.720510+0000 mon.c (mon.2) 544 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.729883+0000 mon.c (mon.2) 545 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.729883+0000 mon.c (mon.2) 545 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.730329+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.730329+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: cluster 2026-03-09T17:32:40.736490+0000 mgr.y (mgr.14505) 274 : cluster [DBG] pgmap v403: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: cluster 2026-03-09T17:32:40.736490+0000 mgr.y (mgr.14505) 274 : cluster [DBG] pgmap v403: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.749801+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:40.749801+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.059366+0000 mon.b (mon.1) 327 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.059366+0000 mon.b (mon.1) 327 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.141515+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.141515+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.142429+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.142429+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.142866+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.142866+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.143633+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: audit 2026-03-09T17:32:41.143633+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: cluster 2026-03-09T17:32:41.681250+0000 mon.a (mon.0) 2209 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:41 vm00 bash[28333]: cluster 2026-03-09T17:32:41.681250+0000 mon.a (mon.0) 2209 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.720510+0000 mon.c (mon.2) 544 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.720510+0000 mon.c (mon.2) 544 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.729883+0000 mon.c (mon.2) 545 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.729883+0000 mon.c (mon.2) 545 : audit [DBG] from='client.? 
192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.730329+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.730329+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: cluster 2026-03-09T17:32:40.736490+0000 mgr.y (mgr.14505) 274 : cluster [DBG] pgmap v403: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: cluster 2026-03-09T17:32:40.736490+0000 mgr.y (mgr.14505) 274 : cluster [DBG] pgmap v403: 323 pgs: 8 unknown, 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.749801+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:40.749801+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.059366+0000 mon.b (mon.1) 327 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.059366+0000 mon.b (mon.1) 327 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-60118-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.141515+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.141515+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.142429+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.142429+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.142866+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.142866+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:32:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.143633+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: audit 2026-03-09T17:32:41.143633+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]: dispatch 2026-03-09T17:32:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: cluster 2026-03-09T17:32:41.681250+0000 mon.a (mon.0) 2209 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:42.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:41 vm00 bash[20770]: cluster 2026-03-09T17:32:41.681250+0000 mon.a (mon.0) 2209 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.714260+0000 mgr.y (mgr.14505) 275 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.714260+0000 mgr.y (mgr.14505) 275 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.783610+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.783610+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.783902+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]': finished 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.783902+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]': finished 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.790014+0000 mon.c (mon.2) 547 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.790014+0000 mon.c (mon.2) 547 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: cluster 2026-03-09T17:32:41.800392+0000 mon.a (mon.0) 2212 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: cluster 2026-03-09T17:32:41.800392+0000 mon.a (mon.0) 2212 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.807596+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:41.807596+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:42.668909+0000 mon.c (mon.2) 548 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:42.668909+0000 mon.c (mon.2) 548 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:42.737296+0000 mon.c (mon.2) 549 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:42.737296+0000 mon.c (mon.2) 549 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:42.738510+0000 mon.a (mon.0) 2214 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:42 vm02 bash[23351]: audit 2026-03-09T17:32:42.738510+0000 mon.a (mon.0) 2214 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.714260+0000 mgr.y (mgr.14505) 275 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.714260+0000 mgr.y (mgr.14505) 275 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.783610+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.783610+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.783902+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.783902+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.790014+0000 mon.c (mon.2) 547 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.790014+0000 mon.c (mon.2) 547 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: cluster 2026-03-09T17:32:41.800392+0000 mon.a (mon.0) 2212 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: cluster 2026-03-09T17:32:41.800392+0000 mon.a (mon.0) 2212 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.807596+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:41.807596+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:42.668909+0000 mon.c (mon.2) 548 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:42.668909+0000 mon.c (mon.2) 548 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:42.737296+0000 mon.c (mon.2) 549 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:42.737296+0000 mon.c (mon.2) 549 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:42.738510+0000 mon.a (mon.0) 2214 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:42 vm00 bash[28333]: audit 2026-03-09T17:32:42.738510+0000 mon.a (mon.0) 2214 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.714260+0000 mgr.y (mgr.14505) 275 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.714260+0000 mgr.y (mgr.14505) 275 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.783610+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.783610+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-60009-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.783902+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.783902+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-43"}]': finished 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.790014+0000 mon.c (mon.2) 547 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.790014+0000 mon.c (mon.2) 547 : audit [DBG] from='client.? 192.168.123.100:0/4242335220' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: cluster 2026-03-09T17:32:41.800392+0000 mon.a (mon.0) 2212 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: cluster 2026-03-09T17:32:41.800392+0000 mon.a (mon.0) 2212 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.807596+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:41.807596+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:42.668909+0000 mon.c (mon.2) 548 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:42.668909+0000 mon.c (mon.2) 548 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:42.737296+0000 mon.c (mon.2) 549 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:42.737296+0000 mon.c (mon.2) 549 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:42.738510+0000 mon.a (mon.0) 2214 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:42 vm00 bash[20770]: audit 2026-03-09T17:32:42.738510+0000 mon.a (mon.0) 2214 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: cluster 2026-03-09T17:32:42.736838+0000 mgr.y (mgr.14505) 276 : cluster [DBG] pgmap v405: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: cluster 2026-03-09T17:32:42.736838+0000 mgr.y (mgr.14505) 276 : cluster [DBG] pgmap v405: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.823534+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.823534+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.823647+0000 mon.a (mon.0) 2216 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]': finished 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.823647+0000 mon.a (mon.0) 2216 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]': finished 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: cluster 2026-03-09T17:32:42.827876+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: cluster 2026-03-09T17:32:42.827876+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.836042+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.836042+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.841715+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.841715+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.861129+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.861129+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.861464+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.861464+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.861777+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.861777+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.862318+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.862318+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.863235+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:42.863235+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:43.827646+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:43.827646+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:43.827737+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:44.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: audit 2026-03-09T17:32:43.827737+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:44.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: cluster 2026-03-09T17:32:43.831623+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T17:32:44.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:43 vm02 bash[23351]: cluster 2026-03-09T17:32:43.831623+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: cluster 2026-03-09T17:32:42.736838+0000 mgr.y (mgr.14505) 276 : cluster [DBG] pgmap v405: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: cluster 2026-03-09T17:32:42.736838+0000 mgr.y (mgr.14505) 276 : cluster [DBG] pgmap v405: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.823534+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.823534+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.823647+0000 mon.a (mon.0) 2216 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.823647+0000 mon.a (mon.0) 2216 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: cluster 2026-03-09T17:32:42.827876+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: cluster 2026-03-09T17:32:42.827876+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.836042+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.836042+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.841715+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.841715+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.861129+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.861129+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.861464+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.861464+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.861777+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.861777+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.862318+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.862318+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.863235+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:42.863235+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:43.827646+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:43.827646+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:43.827737+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: audit 2026-03-09T17:32:43.827737+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: cluster 2026-03-09T17:32:43.831623+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:43 vm00 bash[20770]: cluster 2026-03-09T17:32:43.831623+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: cluster 2026-03-09T17:32:42.736838+0000 mgr.y (mgr.14505) 276 : cluster [DBG] pgmap v405: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: cluster 2026-03-09T17:32:42.736838+0000 mgr.y (mgr.14505) 276 : cluster [DBG] pgmap v405: 315 pgs: 315 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.823534+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.823534+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.823647+0000 mon.a (mon.0) 2216 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]': finished 2026-03-09T17:32:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.823647+0000 mon.a (mon.0) 2216 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-60009-1", "var": "pgp_num_actual", "val": "21"}]': finished 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: cluster 2026-03-09T17:32:42.827876+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: cluster 2026-03-09T17:32:42.827876+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.836042+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.836042+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 
192.168.123.100:0/2345612956' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.841715+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.841715+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.861129+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.861129+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.861464+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.861464+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.861777+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.861777+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.862318+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.862318+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.863235+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:42.863235+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:43.827646+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:43.827646+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 192.168.123.100:0/2345612956' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59916-57"}]': finished 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:43.827737+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: audit 2026-03-09T17:32:43.827737+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-60009-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: cluster 2026-03-09T17:32:43.831623+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T17:32:44.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:43 vm00 bash[28333]: cluster 2026-03-09T17:32:43.831623+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.855775+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.855775+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 
192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.857053+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.857053+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.861625+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.861625+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874190+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874190+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874202+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874202+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874665+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 
192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874665+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874721+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874721+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874800+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.874800+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.875294+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.875294+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.875782+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:43.875782+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.831225+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.831225+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.831325+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.831325+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.836105+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.836105+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.841079+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.841079+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: cluster 2026-03-09T17:32:44.845992+0000 mon.a (mon.0) 2232 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: cluster 2026-03-09T17:32:44.845992+0000 mon.a (mon.0) 2232 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.847813+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.847813+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.848164+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:44 vm02 bash[23351]: audit 2026-03-09T17:32:44.848164+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.855775+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.855775+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.857053+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.857053+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.861625+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.861625+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 
192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874190+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874190+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874202+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874202+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874665+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874665+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874721+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874721+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874800+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.874800+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.875294+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.875294+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.875782+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:43.875782+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.831225+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.831225+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.831325+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.831325+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.836105+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 
192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.836105+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.855775+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.855775+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.857053+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.857053+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.861625+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.861625+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874190+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874190+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 
192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874202+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874202+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874665+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874665+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874721+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874721+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874800+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.874800+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.875294+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.875294+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.875782+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:43.875782+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.831225+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.831225+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.831325+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.831325+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59916-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.836105+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.836105+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.841079+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.841079+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: cluster 2026-03-09T17:32:44.845992+0000 mon.a (mon.0) 2232 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: cluster 2026-03-09T17:32:44.845992+0000 mon.a (mon.0) 2232 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.847813+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.847813+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.848164+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:44 vm00 bash[28333]: audit 2026-03-09T17:32:44.848164+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.841079+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.841079+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: cluster 2026-03-09T17:32:44.845992+0000 mon.a (mon.0) 2232 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: cluster 2026-03-09T17:32:44.845992+0000 mon.a (mon.0) 2232 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.847813+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.847813+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.848164+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:45.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:44 vm00 bash[20770]: audit 2026-03-09T17:32:44.848164+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:32:46.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:46 vm00 bash[28333]: cluster 2026-03-09T17:32:44.737170+0000 mgr.y (mgr.14505) 277 : cluster [DBG] pgmap v408: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:46 vm00 bash[28333]: cluster 2026-03-09T17:32:44.737170+0000 mgr.y (mgr.14505) 277 : cluster [DBG] pgmap v408: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:46 vm00 bash[20770]: cluster 2026-03-09T17:32:44.737170+0000 mgr.y (mgr.14505) 277 : cluster [DBG] pgmap v408: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:46 vm00 bash[20770]: cluster 2026-03-09T17:32:44.737170+0000 mgr.y (mgr.14505) 277 : cluster [DBG] pgmap v408: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:46 vm02 bash[23351]: cluster 2026-03-09T17:32:44.737170+0000 mgr.y (mgr.14505) 277 : cluster [DBG] pgmap v408: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:46 vm02 bash[23351]: cluster 2026-03-09T17:32:44.737170+0000 mgr.y (mgr.14505) 277 : cluster [DBG] pgmap v408: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:32:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:32:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7462ddf6:::expected=5 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:89d3ae78:::11:head 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:89d3ae78:::11:head expected=13:89d3ae78:::11:head 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:89d3ae78:::11:head -> 11 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=11 expected=11 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:863748b0:::15:head 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:863748b0:::15:head expected=13:863748b0:::15:head 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:863748b0:::15:head -> 15 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=15 expected=15 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:d83876eb:::4:head 2026-03-09T17:32:47.070 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:d83876eb:::4:head expected=13:d83876eb:::4:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 
13:d83876eb:::4:head -> 4 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=4 expected=4 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:f905c69b:::2:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:f905c69b:::2:head expected=13:f905c69b:::2:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:f905c69b:::2:head -> 2 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=2 expected=2 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:e960b815:::9:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:e960b815:::9:head expected=13:e960b815:::9:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:e960b815:::9:head -> 9 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=9 expected=9 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:de5d7c5f:::12:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:de5d7c5f:::12:head expected=13:de5d7c5f:::12:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:de5d7c5f:::12:head -> 12 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=12 expected=12 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:5c6b0b28:::7:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:5c6b0b28:::7:head expected=13:5c6b0b28:::7:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:5c6b0b28:::7:head -> 7 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=7 expected=7 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 13:62a1935d:::14:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=13:62a1935d:::14:head expected=13:62a1935d:::14:head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 13:62a1935d:::14:head -> 14 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=14 expected=14 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.ListObjectsCursor (147 ms) 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjects 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.EnumerateObjects (89707 ms) 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjectsSplit 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 0/5 -> MIN 13:33333333::::head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 1/5 -> 13:33333333::::head 13:66666666::::head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 2/5 -> 13:66666666::::head 13:99999999::::head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 3/5 -> 13:99999999::::head 13:cccccccc::::head 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 4/5 -> 13:cccccccc::::head MAX 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] 
LibRadosList.EnumerateObjectsSplit (138051 ms) 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 7 tests from LibRadosList (228361 ms total) 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 3 tests from LibRadosListEC 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListEC.ListObjects 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListEC.ListObjects (1028 ms) 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsNS 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo2,foo3 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo2 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo3 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo4,foo5 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo4 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo5 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo6,foo7 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo7 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo6 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo4 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo5 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo7 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo6 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo1 2026-03-09T17:32:47.071 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo1 2026-03-09T17:32:47.072 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo2 2026-03-09T17:32:47.072 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo3 2026-03-09T17:32:47.072 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsNS (41 ms) 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:45.980224+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:45.980224+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:45.980322+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:45.980322+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:45.988561+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:45.988561+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: cluster 2026-03-09T17:32:46.007342+0000 mon.a (mon.0) 2237 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: cluster 2026-03-09T17:32:46.007342+0000 mon.a (mon.0) 2237 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:46.008777+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:46.008777+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: cluster 2026-03-09T17:32:46.681897+0000 mon.a (mon.0) 2239 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: cluster 2026-03-09T17:32:46.681897+0000 mon.a (mon.0) 2239 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:46.983996+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:46.983996+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:46.984077+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:47 vm00 bash[28333]: audit 2026-03-09T17:32:46.984077+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:45.980224+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:45.980224+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:45.980322+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:45.980322+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:45.988561+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:45.988561+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: cluster 2026-03-09T17:32:46.007342+0000 mon.a (mon.0) 2237 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: cluster 2026-03-09T17:32:46.007342+0000 mon.a (mon.0) 2237 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:46.008777+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:46.008777+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: cluster 2026-03-09T17:32:46.681897+0000 mon.a (mon.0) 2239 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: cluster 2026-03-09T17:32:46.681897+0000 mon.a (mon.0) 2239 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:46.983996+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:46.983996+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:46.984077+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:32:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:47 vm00 bash[20770]: audit 2026-03-09T17:32:46.984077+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:45.980224+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:45.980224+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-60009-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:45.980322+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:45.980322+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:45.988561+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:45.988561+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: cluster 2026-03-09T17:32:46.007342+0000 mon.a (mon.0) 2237 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: cluster 2026-03-09T17:32:46.007342+0000 mon.a (mon.0) 2237 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:46.008777+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:46.008777+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: cluster 2026-03-09T17:32:46.681897+0000 mon.a (mon.0) 2239 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: cluster 2026-03-09T17:32:46.681897+0000 mon.a (mon.0) 2239 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:46.983996+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:46.983996+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59916-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:46.984077+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:32:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:47 vm02 bash[23351]: audit 2026-03-09T17:32:46.984077+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: cluster 2026-03-09T17:32:46.737532+0000 mgr.y (mgr.14505) 278 : cluster [DBG] pgmap v411: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: cluster 2026-03-09T17:32:46.737532+0000 mgr.y (mgr.14505) 278 : cluster [DBG] pgmap v411: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:46.992354+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:46.992354+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: cluster 2026-03-09T17:32:47.012229+0000 mon.a (mon.0) 2242 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: cluster 2026-03-09T17:32:47.012229+0000 mon.a (mon.0) 2242 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.018810+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.018810+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.170940+0000 mon.c (mon.2) 550 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.170940+0000 mon.c (mon.2) 550 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.500870+0000 mon.c (mon.2) 551 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.500870+0000 mon.c (mon.2) 551 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.501832+0000 mon.c (mon.2) 552 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.501832+0000 mon.c (mon.2) 552 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.517176+0000 mon.a (mon.0) 2244 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.517176+0000 mon.a (mon.0) 2244 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.988319+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.988319+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: cluster 2026-03-09T17:32:46.737532+0000 mgr.y (mgr.14505) 278 : cluster [DBG] pgmap v411: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: cluster 2026-03-09T17:32:46.737532+0000 mgr.y (mgr.14505) 278 : cluster [DBG] pgmap v411: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:46.992354+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:46.992354+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: cluster 2026-03-09T17:32:47.012229+0000 mon.a (mon.0) 2242 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: cluster 2026-03-09T17:32:47.012229+0000 mon.a (mon.0) 2242 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.018810+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.018810+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.170940+0000 mon.c (mon.2) 550 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.170940+0000 mon.c (mon.2) 550 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.500870+0000 mon.c (mon.2) 551 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.500870+0000 mon.c (mon.2) 551 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.501832+0000 mon.c (mon.2) 552 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.501832+0000 mon.c (mon.2) 552 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.517176+0000 mon.a (mon.0) 2244 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.517176+0000 mon.a (mon.0) 2244 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.988319+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.988319+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.991974+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.991974+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: cluster 2026-03-09T17:32:47.995306+0000 mon.a (mon.0) 2246 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: cluster 2026-03-09T17:32:47.995306+0000 mon.a (mon.0) 2246 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.998208+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:47.998208+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:48.000749+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:48.000749+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:48.000829+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:48 vm00 bash[28333]: audit 2026-03-09T17:32:48.000829+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.991974+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.991974+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: cluster 2026-03-09T17:32:47.995306+0000 mon.a (mon.0) 2246 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: cluster 2026-03-09T17:32:47.995306+0000 mon.a (mon.0) 2246 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.998208+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:47.998208+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:48.000749+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:48.000749+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:48.000829+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:48 vm00 bash[20770]: audit 2026-03-09T17:32:48.000829+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: cluster 2026-03-09T17:32:46.737532+0000 mgr.y (mgr.14505) 278 : cluster [DBG] pgmap v411: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: cluster 2026-03-09T17:32:46.737532+0000 mgr.y (mgr.14505) 278 : cluster [DBG] pgmap v411: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:46.992354+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:46.992354+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: cluster 2026-03-09T17:32:47.012229+0000 mon.a (mon.0) 2242 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: cluster 2026-03-09T17:32:47.012229+0000 mon.a (mon.0) 2242 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.018810+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.018810+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.170940+0000 mon.c (mon.2) 550 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.170940+0000 mon.c (mon.2) 550 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.500870+0000 mon.c (mon.2) 551 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.500870+0000 mon.c (mon.2) 551 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.501832+0000 mon.c (mon.2) 552 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.501832+0000 mon.c (mon.2) 552 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.517176+0000 mon.a (mon.0) 2244 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:32:48.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.517176+0000 mon.a (mon.0) 2244 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.988319+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.988319+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.991974+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.991974+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: cluster 2026-03-09T17:32:47.995306+0000 mon.a (mon.0) 2246 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: cluster 2026-03-09T17:32:47.995306+0000 mon.a (mon.0) 2246 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T17:32:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.998208+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:47.998208+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:48.000749+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:48.000749+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:32:48.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:48.000829+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:48.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:48 vm02 bash[23351]: audit 2026-03-09T17:32:48.000829+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: cluster 2026-03-09T17:32:48.737923+0000 mgr.y (mgr.14505) 279 : cluster [DBG] pgmap v414: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: cluster 2026-03-09T17:32:48.737923+0000 mgr.y (mgr.14505) 279 : cluster [DBG] pgmap v414: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:48.997044+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:48.997044+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:48.997111+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:48.997111+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: cluster 2026-03-09T17:32:49.001449+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: cluster 2026-03-09T17:32:49.001449+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.011617+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.011617+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.011766+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.011766+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.011915+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.011915+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.015556+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.015556+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.015643+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.015643+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.015744+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:50 vm00 bash[28333]: audit 2026-03-09T17:32:49.015744+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: cluster 2026-03-09T17:32:48.737923+0000 mgr.y (mgr.14505) 279 : cluster [DBG] pgmap v414: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: cluster 2026-03-09T17:32:48.737923+0000 mgr.y (mgr.14505) 279 : cluster [DBG] pgmap v414: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:48.997044+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:48.997044+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:48.997111+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:48.997111+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: cluster 2026-03-09T17:32:49.001449+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: cluster 2026-03-09T17:32:49.001449+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.011617+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.011617+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.011766+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 
192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.011766+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.011915+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.011915+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.015556+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.015556+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.015643+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.015643+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.015744+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:50 vm00 bash[20770]: audit 2026-03-09T17:32:49.015744+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: cluster 2026-03-09T17:32:48.737923+0000 mgr.y (mgr.14505) 279 : cluster [DBG] pgmap v414: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: cluster 2026-03-09T17:32:48.737923+0000 mgr.y (mgr.14505) 279 : cluster [DBG] pgmap v414: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:48.997044+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:48.997044+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:48.997111+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:48.997111+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: cluster 2026-03-09T17:32:49.001449+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: cluster 2026-03-09T17:32:49.001449+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.011617+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.011617+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.011766+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 
192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.011766+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/1060103403' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.011915+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.011915+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.015556+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.015556+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.015643+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.015643+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.015744+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:50 vm02 bash[23351]: audit 2026-03-09T17:32:49.015744+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.000860+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.000860+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.000990+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.000990+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.002692+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.002692+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.005426+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.005426+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: cluster 2026-03-09T17:32:50.010831+0000 mon.a (mon.0) 2258 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: cluster 2026-03-09T17:32:50.010831+0000 mon.a (mon.0) 2258 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.015489+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:51 vm02 bash[23351]: audit 2026-03-09T17:32:50.015489+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.000860+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:32:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.000860+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:32:51.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.000990+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.000990+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.002692+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.002692+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.005426+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.005426+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: cluster 2026-03-09T17:32:50.010831+0000 mon.a (mon.0) 2258 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: cluster 2026-03-09T17:32:50.010831+0000 mon.a (mon.0) 2258 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.015489+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:51 vm00 bash[28333]: audit 2026-03-09T17:32:50.015489+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.000860+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.000860+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.000990+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.000990+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-60009-2"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.002692+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.002692+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.005426+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.005426+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/1085655346' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: cluster 2026-03-09T17:32:50.010831+0000 mon.a (mon.0) 2258 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: cluster 2026-03-09T17:32:50.010831+0000 mon.a (mon.0) 2258 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.015489+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:51 vm00 bash[20770]: audit 2026-03-09T17:32:50.015489+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]: dispatch 2026-03-09T17:32:52.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:32:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: cluster 2026-03-09T17:32:50.738351+0000 mgr.y (mgr.14505) 280 : cluster [DBG] pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: cluster 2026-03-09T17:32:50.738351+0000 mgr.y (mgr.14505) 280 : cluster [DBG] pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.030499+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.030499+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: cluster 2026-03-09T17:32:51.036138+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: cluster 2026-03-09T17:32:51.036138+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.058355+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.058355+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.073856+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.073856+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.078153+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 
192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.078153+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.078651+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.078651+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.079394+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.079394+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.079828+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.079828+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.080486+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.080486+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.080921+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: audit 2026-03-09T17:32:51.080921+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: cluster 2026-03-09T17:32:51.682562+0000 mon.a (mon.0) 2266 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:52 vm02 bash[23351]: cluster 2026-03-09T17:32:51.682562+0000 mon.a (mon.0) 2266 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: cluster 2026-03-09T17:32:50.738351+0000 mgr.y (mgr.14505) 280 : cluster [DBG] pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: cluster 2026-03-09T17:32:50.738351+0000 mgr.y (mgr.14505) 280 : cluster [DBG] pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.030499+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.030499+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: cluster 2026-03-09T17:32:51.036138+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: cluster 2026-03-09T17:32:51.036138+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.058355+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.058355+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 
192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.073856+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.073856+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.078153+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.078153+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.078651+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.078651+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.079394+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.079394+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.079828+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.079828+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.080486+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.080486+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.080921+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: audit 2026-03-09T17:32:51.080921+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: cluster 2026-03-09T17:32:51.682562+0000 mon.a (mon.0) 2266 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:52 vm00 bash[28333]: cluster 2026-03-09T17:32:51.682562+0000 mon.a (mon.0) 2266 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: cluster 2026-03-09T17:32:50.738351+0000 mgr.y (mgr.14505) 280 : cluster [DBG] pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: cluster 2026-03-09T17:32:50.738351+0000 mgr.y (mgr.14505) 280 : cluster [DBG] pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.030499+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.030499+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59916-58"}]': finished 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: cluster 2026-03-09T17:32:51.036138+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: cluster 2026-03-09T17:32:51.036138+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.058355+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.058355+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.073856+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.073856+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.078153+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.078153+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.078651+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.078651+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.079394+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 
192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.079394+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.079828+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.079828+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.080486+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.080486+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.080921+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: audit 2026-03-09T17:32:51.080921+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: cluster 2026-03-09T17:32:51.682562+0000 mon.a (mon.0) 2266 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:52.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:52 vm00 bash[20770]: cluster 2026-03-09T17:32:51.682562+0000 mon.a (mon.0) 2266 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsStart 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 1 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 10 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 13 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 7 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 14 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 0 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 15 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 11 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 5 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 8 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 6 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 3 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 4 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 12 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 9 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2 0 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsStart (59 ms) 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 3 tests from LibRadosListEC (1128 ms total) 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 1 test from LibRadosListNP 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListNP.ListObjectsError 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListNP.ListObjectsError (3080 ms) 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 1 test from LibRadosListNP (3080 ms total) 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] Global test environment tear-down 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [==========] 11 tests from 3 test suites ran. (240889 ms total) 2026-03-09T17:32:53.099 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ PASSED ] 11 tests. 
2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:51.724687+0000 mgr.y (mgr.14505) 281 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:51.724687+0000 mgr.y (mgr.14505) 281 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.088775+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.088775+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.088820+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.088820+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: cluster 2026-03-09T17:32:52.099523+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: cluster 2026-03-09T17:32:52.099523+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.100204+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.100204+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.119131+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.119131+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.152627+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.152627+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.161253+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:52.161253+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:53.092303+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: audit 2026-03-09T17:32:53.092303+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: cluster 2026-03-09T17:32:53.111867+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:53 vm00 bash[20770]: cluster 2026-03-09T17:32:53.111867+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:51.724687+0000 mgr.y (mgr.14505) 281 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:51.724687+0000 mgr.y (mgr.14505) 281 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.088775+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.088775+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.088820+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.088820+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: cluster 2026-03-09T17:32:52.099523+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: cluster 2026-03-09T17:32:52.099523+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.100204+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 
192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.100204+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.119131+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.119131+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.152627+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.152627+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.161253+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:52.161253+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:53.092303+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: audit 2026-03-09T17:32:53.092303+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: cluster 2026-03-09T17:32:53.111867+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T17:32:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:53 vm00 bash[28333]: cluster 2026-03-09T17:32:53.111867+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:51.724687+0000 mgr.y (mgr.14505) 281 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:51.724687+0000 mgr.y (mgr.14505) 281 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.088775+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.088775+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60009-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.088820+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.088820+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: cluster 2026-03-09T17:32:52.099523+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: cluster 2026-03-09T17:32:52.099523+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.100204+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.100204+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.119131+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.119131+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.152627+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.152627+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.100:0/2909685283' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.161253+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:52.161253+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:53.092303+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: audit 2026-03-09T17:32:53.092303+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-60009-3","pool2":"test-rados-api-vm00-60009-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: cluster 2026-03-09T17:32:53.111867+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T17:32:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:53 vm02 bash[23351]: cluster 2026-03-09T17:32:53.111867+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T17:32:54.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:54 vm00 bash[28333]: cluster 2026-03-09T17:32:52.738786+0000 mgr.y (mgr.14505) 282 : cluster [DBG] pgmap v420: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:54.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:54 vm00 bash[28333]: cluster 2026-03-09T17:32:52.738786+0000 mgr.y (mgr.14505) 282 : cluster [DBG] pgmap v420: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:54.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:54 vm00 bash[28333]: audit 2026-03-09T17:32:54.095443+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:54 vm00 bash[28333]: audit 2026-03-09T17:32:54.095443+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:54 vm00 bash[28333]: cluster 2026-03-09T17:32:54.099613+0000 mon.a (mon.0) 2275 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:54 vm00 bash[28333]: cluster 2026-03-09T17:32:54.099613+0000 mon.a (mon.0) 2275 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:54 vm00 bash[20770]: cluster 2026-03-09T17:32:52.738786+0000 mgr.y (mgr.14505) 282 : cluster [DBG] pgmap v420: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:54 vm00 bash[20770]: cluster 2026-03-09T17:32:52.738786+0000 mgr.y (mgr.14505) 282 : cluster [DBG] pgmap v420: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:54 vm00 bash[20770]: audit 2026-03-09T17:32:54.095443+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:54 vm00 bash[20770]: audit 2026-03-09T17:32:54.095443+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:54 vm00 bash[20770]: cluster 2026-03-09T17:32:54.099613+0000 mon.a (mon.0) 2275 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T17:32:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:54 vm00 bash[20770]: cluster 2026-03-09T17:32:54.099613+0000 mon.a (mon.0) 2275 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T17:32:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:54 vm02 bash[23351]: cluster 2026-03-09T17:32:52.738786+0000 mgr.y (mgr.14505) 282 : cluster [DBG] pgmap v420: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:54 vm02 bash[23351]: cluster 2026-03-09T17:32:52.738786+0000 mgr.y (mgr.14505) 282 : cluster [DBG] pgmap v420: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 686 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:32:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:54 vm02 bash[23351]: audit 2026-03-09T17:32:54.095443+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:54 vm02 bash[23351]: audit 2026-03-09T17:32:54.095443+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59916-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:54 vm02 bash[23351]: cluster 2026-03-09T17:32:54.099613+0000 mon.a (mon.0) 2275 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T17:32:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:54 vm02 bash[23351]: cluster 2026-03-09T17:32:54.099613+0000 mon.a (mon.0) 2275 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T17:32:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:56 vm02 bash[23351]: cluster 2026-03-09T17:32:54.739124+0000 mgr.y (mgr.14505) 283 : cluster [DBG] pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:56 vm02 bash[23351]: cluster 2026-03-09T17:32:54.739124+0000 mgr.y (mgr.14505) 283 : cluster [DBG] pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:56 vm02 bash[23351]: cluster 2026-03-09T17:32:55.111175+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T17:32:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:56 vm02 bash[23351]: cluster 2026-03-09T17:32:55.111175+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T17:32:56.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:56 vm00 bash[28333]: cluster 2026-03-09T17:32:54.739124+0000 mgr.y (mgr.14505) 283 : cluster [DBG] pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:56.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:56 vm00 bash[28333]: cluster 2026-03-09T17:32:54.739124+0000 mgr.y (mgr.14505) 283 : cluster [DBG] pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:56.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:56 vm00 bash[28333]: cluster 2026-03-09T17:32:55.111175+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T17:32:56.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:56 vm00 bash[28333]: cluster 2026-03-09T17:32:55.111175+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T17:32:56.537 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:56 vm00 bash[20770]: cluster 2026-03-09T17:32:54.739124+0000 mgr.y (mgr.14505) 283 : cluster [DBG] pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:56 vm00 bash[20770]: cluster 
2026-03-09T17:32:54.739124+0000 mgr.y (mgr.14505) 283 : cluster [DBG] pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:56 vm00 bash[20770]: cluster 2026-03-09T17:32:55.111175+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T17:32:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:56 vm00 bash[20770]: cluster 2026-03-09T17:32:55.111175+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T17:32:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:32:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:32:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: cluster 2026-03-09T17:32:56.108066+0000 mon.a (mon.0) 2277 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: cluster 2026-03-09T17:32:56.108066+0000 mon.a (mon.0) 2277 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: audit 2026-03-09T17:32:56.110462+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: audit 2026-03-09T17:32:56.110462+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: audit 2026-03-09T17:32:56.122997+0000 mon.a (mon.0) 2278 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: audit 2026-03-09T17:32:56.122997+0000 mon.a (mon.0) 2278 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: cluster 2026-03-09T17:32:56.683381+0000 mon.a (mon.0) 2279 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:57 vm02 bash[23351]: cluster 2026-03-09T17:32:56.683381+0000 mon.a (mon.0) 2279 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:57.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: cluster 2026-03-09T17:32:56.108066+0000 mon.a (mon.0) 2277 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: cluster 2026-03-09T17:32:56.108066+0000 mon.a (mon.0) 2277 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: audit 2026-03-09T17:32:56.110462+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: audit 2026-03-09T17:32:56.110462+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: audit 2026-03-09T17:32:56.122997+0000 mon.a (mon.0) 2278 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: audit 2026-03-09T17:32:56.122997+0000 mon.a (mon.0) 2278 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: cluster 2026-03-09T17:32:56.683381+0000 mon.a (mon.0) 2279 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:57 vm00 bash[28333]: cluster 2026-03-09T17:32:56.683381+0000 mon.a (mon.0) 2279 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: cluster 2026-03-09T17:32:56.108066+0000 mon.a (mon.0) 2277 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: cluster 2026-03-09T17:32:56.108066+0000 mon.a (mon.0) 2277 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: audit 2026-03-09T17:32:56.110462+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 
192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: audit 2026-03-09T17:32:56.110462+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: audit 2026-03-09T17:32:56.122997+0000 mon.a (mon.0) 2278 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: audit 2026-03-09T17:32:56.122997+0000 mon.a (mon.0) 2278 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: cluster 2026-03-09T17:32:56.683381+0000 mon.a (mon.0) 2279 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:57 vm00 bash[20770]: cluster 2026-03-09T17:32:56.683381+0000 mon.a (mon.0) 2279 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout:.RoundTripAppendPP (2999 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RacingRemovePP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RacingRemovePP (3041 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP (3142 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP2 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP2 (3054 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolEIOFlag 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: setting pool EIO 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: max_success 165, min_failed 166 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolEIOFlag (4024 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiReads 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiReads (3009 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio (118855 ms total) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP 2026-03-09T17:32:58.171 
INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.ReadIntoBufferlist 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.ReadIntoBufferlist (3013 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.XattrsRoundTripPP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.XattrsRoundTripPP (9044 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RmXattrPP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RmXattrPP (15199 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RemoveTestPP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RemoveTestPP (3038 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP (30294 ms total) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosIoPP.XattrListPP (3024 ms) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP (3024 ms total) 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC 2026-03-09T17:32:58.171 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleWritePP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleWritePP (13638 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.WaitForSafePP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.WaitForSafePP (7191 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP (7088 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP2 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP2 (6501 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP3 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP3 (3162 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripSparseReadPP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripSparseReadPP (7073 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripAppendPP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripAppendPP (7061 ms) 2026-03-09T17:32:58.172 
INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsCompletePP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsCompletePP (6850 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsSafePP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsSafePP (7425 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ReturnValuePP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ReturnValuePP (6403 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushPP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushPP (7144 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushAsyncPP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushAsyncPP (7228 ms) 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP 2026-03-09T17:32:58.172 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP (7104 ms) 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: cluster 2026-03-09T17:32:56.739464+0000 mgr.y (mgr.14505) 284 : cluster [DBG] pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: cluster 2026-03-09T17:32:56.739464+0000 mgr.y (mgr.14505) 284 : cluster [DBG] pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.117076+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.117076+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: cluster 2026-03-09T17:32:57.120007+0000 mon.a (mon.0) 2281 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: cluster 2026-03-09T17:32:57.120007+0000 mon.a (mon.0) 2281 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.129471+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.129471+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 
192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.138379+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.138379+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.675559+0000 mon.c (mon.2) 561 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.675559+0000 mon.c (mon.2) 561 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.686258+0000 mon.c (mon.2) 562 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.686258+0000 mon.c (mon.2) 562 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.686785+0000 mon.c (mon.2) 563 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.686785+0000 mon.c (mon.2) 563 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687161+0000 mon.c (mon.2) 564 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687161+0000 mon.c (mon.2) 564 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687637+0000 mon.a (mon.0) 2283 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": 
"260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687637+0000 mon.a (mon.0) 2283 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687844+0000 mon.a (mon.0) 2284 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687844+0000 mon.a (mon.0) 2284 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687933+0000 mon.a (mon.0) 2285 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:58 vm02 bash[23351]: audit 2026-03-09T17:32:57.687933+0000 mon.a (mon.0) 2285 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: cluster 2026-03-09T17:32:56.739464+0000 mgr.y (mgr.14505) 284 : cluster [DBG] pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: cluster 2026-03-09T17:32:56.739464+0000 mgr.y (mgr.14505) 284 : cluster [DBG] pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.117076+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.117076+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: cluster 2026-03-09T17:32:57.120007+0000 mon.a (mon.0) 2281 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: cluster 2026-03-09T17:32:57.120007+0000 mon.a (mon.0) 2281 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.129471+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 
192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.129471+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.138379+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.138379+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.675559+0000 mon.c (mon.2) 561 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.675559+0000 mon.c (mon.2) 561 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.686258+0000 mon.c (mon.2) 562 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.686258+0000 mon.c (mon.2) 562 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.686785+0000 mon.c (mon.2) 563 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.686785+0000 mon.c (mon.2) 563 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687161+0000 mon.c (mon.2) 564 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: cluster 2026-03-09T17:32:56.739464+0000 mgr.y (mgr.14505) 284 : cluster [DBG] pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: cluster 2026-03-09T17:32:56.739464+0000 mgr.y (mgr.14505) 284 : cluster [DBG] pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.117076+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.117076+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: cluster 2026-03-09T17:32:57.120007+0000 mon.a (mon.0) 2281 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: cluster 2026-03-09T17:32:57.120007+0000 mon.a (mon.0) 2281 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.129471+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.129471+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.100:0/2421229377' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.138379+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.138379+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.675559+0000 mon.c (mon.2) 561 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.675559+0000 mon.c (mon.2) 561 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.686258+0000 mon.c (mon.2) 562 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.686258+0000 mon.c (mon.2) 562 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.686785+0000 mon.c (mon.2) 563 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.686785+0000 mon.c (mon.2) 563 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687161+0000 mon.c (mon.2) 564 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687161+0000 mon.c (mon.2) 564 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687637+0000 mon.a (mon.0) 2283 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687637+0000 mon.a (mon.0) 2283 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687844+0000 mon.a (mon.0) 2284 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: 
dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687844+0000 mon.a (mon.0) 2284 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687933+0000 mon.a (mon.0) 2285 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:58 vm00 bash[28333]: audit 2026-03-09T17:32:57.687933+0000 mon.a (mon.0) 2285 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687161+0000 mon.c (mon.2) 564 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687637+0000 mon.a (mon.0) 2283 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687637+0000 mon.a (mon.0) 2283 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687844+0000 mon.a (mon.0) 2284 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687844+0000 mon.a (mon.0) 2284 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687933+0000 mon.a (mon.0) 2285 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:58 vm00 bash[20770]: audit 2026-03-09T17:32:57.687933+0000 mon.a (mon.0) 2285 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]: dispatch 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145172+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145172+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145232+0000 mon.a (mon.0) 2287 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145232+0000 mon.a (mon.0) 2287 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145262+0000 mon.a (mon.0) 2288 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145262+0000 mon.a (mon.0) 2288 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145288+0000 mon.a (mon.0) 2289 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: audit 2026-03-09T17:32:58.145288+0000 mon.a (mon.0) 2289 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: cluster 2026-03-09T17:32:58.152302+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:32:59 vm00 bash[28333]: cluster 2026-03-09T17:32:58.152302+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145172+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145172+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145232+0000 mon.a (mon.0) 2287 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145232+0000 mon.a (mon.0) 2287 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145262+0000 mon.a (mon.0) 2288 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145262+0000 mon.a (mon.0) 2288 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145288+0000 mon.a (mon.0) 2289 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: audit 2026-03-09T17:32:58.145288+0000 mon.a (mon.0) 2289 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]': finished 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: cluster 2026-03-09T17:32:58.152302+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T17:32:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:32:59 vm00 bash[20770]: cluster 2026-03-09T17:32:58.152302+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145172+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145172+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59916-59"}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145232+0000 mon.a (mon.0) 2287 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145232+0000 mon.a (mon.0) 2287 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.5", "id": [6, 5]}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145262+0000 mon.a (mon.0) 2288 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145262+0000 mon.a (mon.0) 2288 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.c", "id": [6, 3]}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145288+0000 mon.a (mon.0) 2289 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: audit 2026-03-09T17:32:58.145288+0000 mon.a (mon.0) 2289 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "260.10", "id": [6, 5]}]': finished 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: cluster 2026-03-09T17:32:58.152302+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T17:32:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:32:59 vm02 bash[23351]: cluster 2026-03-09T17:32:58.152302+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: cluster 2026-03-09T17:32:58.739801+0000 mgr.y (mgr.14505) 285 : cluster [DBG] pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: cluster 2026-03-09T17:32:58.739801+0000 mgr.y (mgr.14505) 285 : cluster [DBG] pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: cluster 2026-03-09T17:32:59.164807+0000 mon.a (mon.0) 2291 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: cluster 2026-03-09T17:32:59.164807+0000 mon.a (mon.0) 2291 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: audit 2026-03-09T17:32:59.172403+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 
192.168.123.100:0/1854718803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: audit 2026-03-09T17:32:59.172403+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: audit 2026-03-09T17:33:00.152452+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: audit 2026-03-09T17:33:00.152452+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: cluster 2026-03-09T17:33:00.156757+0000 mon.a (mon.0) 2294 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:00 vm00 bash[28333]: cluster 2026-03-09T17:33:00.156757+0000 mon.a (mon.0) 2294 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: cluster 2026-03-09T17:32:58.739801+0000 mgr.y (mgr.14505) 285 : cluster [DBG] pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: cluster 2026-03-09T17:32:58.739801+0000 mgr.y (mgr.14505) 285 : cluster [DBG] pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: cluster 2026-03-09T17:32:59.164807+0000 mon.a (mon.0) 2291 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: cluster 2026-03-09T17:32:59.164807+0000 mon.a (mon.0) 2291 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: audit 2026-03-09T17:32:59.172403+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: audit 2026-03-09T17:32:59.172403+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 
192.168.123.100:0/1854718803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: audit 2026-03-09T17:33:00.152452+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: audit 2026-03-09T17:33:00.152452+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: cluster 2026-03-09T17:33:00.156757+0000 mon.a (mon.0) 2294 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T17:33:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:00 vm00 bash[20770]: cluster 2026-03-09T17:33:00.156757+0000 mon.a (mon.0) 2294 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: cluster 2026-03-09T17:32:58.739801+0000 mgr.y (mgr.14505) 285 : cluster [DBG] pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: cluster 2026-03-09T17:32:58.739801+0000 mgr.y (mgr.14505) 285 : cluster [DBG] pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: cluster 2026-03-09T17:32:59.164807+0000 mon.a (mon.0) 2291 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: cluster 2026-03-09T17:32:59.164807+0000 mon.a (mon.0) 2291 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: audit 2026-03-09T17:32:59.172403+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: audit 2026-03-09T17:32:59.172403+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: audit 2026-03-09T17:33:00.152452+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 
192.168.123.100:0/1854718803' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: audit 2026-03-09T17:33:00.152452+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.100:0/1854718803' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59916-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: cluster 2026-03-09T17:33:00.156757+0000 mon.a (mon.0) 2294 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T17:33:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:00 vm02 bash[23351]: cluster 2026-03-09T17:33:00.156757+0000 mon.a (mon.0) 2294 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T17:33:02.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:33:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: cluster 2026-03-09T17:33:00.740110+0000 mgr.y (mgr.14505) 286 : cluster [DBG] pgmap v432: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: cluster 2026-03-09T17:33:00.740110+0000 mgr.y (mgr.14505) 286 : cluster [DBG] pgmap v432: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: cluster 2026-03-09T17:33:01.158861+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: cluster 2026-03-09T17:33:01.158861+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.172840+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.172840+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.173954+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.173954+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.175304+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.175304+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.175499+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.175499+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.176722+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.176722+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.176884+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: cluster 2026-03-09T17:33:00.740110+0000 mgr.y (mgr.14505) 286 : cluster [DBG] pgmap v432: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: cluster 2026-03-09T17:33:00.740110+0000 mgr.y (mgr.14505) 286 : cluster [DBG] pgmap v432: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: cluster 2026-03-09T17:33:01.158861+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: cluster 2026-03-09T17:33:01.158861+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.172840+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.172840+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.173954+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.173954+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.175304+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.175304+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.175499+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.175499+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.176722+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.176722+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.176884+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:01.176884+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: cluster 2026-03-09T17:33:01.683954+0000 mon.a (mon.0) 2299 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: cluster 2026-03-09T17:33:01.683954+0000 mon.a (mon.0) 2299 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.113553+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.113553+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.114958+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.114958+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.115085+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.115085+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.116232+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:02 vm00 bash[28333]: audit 2026-03-09T17:33:02.116232+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:01.176884+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: cluster 2026-03-09T17:33:01.683954+0000 mon.a (mon.0) 2299 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: cluster 2026-03-09T17:33:01.683954+0000 mon.a (mon.0) 2299 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.113553+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.113553+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.114958+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.114958+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.115085+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.115085+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.116232+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:02 vm00 bash[20770]: audit 2026-03-09T17:33:02.116232+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: cluster 2026-03-09T17:33:00.740110+0000 mgr.y (mgr.14505) 286 : cluster [DBG] pgmap v432: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: cluster 2026-03-09T17:33:00.740110+0000 mgr.y (mgr.14505) 286 : cluster [DBG] pgmap v432: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: cluster 2026-03-09T17:33:01.158861+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: cluster 2026-03-09T17:33:01.158861+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.172840+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.172840+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 
192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.173954+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.173954+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.175304+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.175304+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.175499+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.175499+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.176722+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.176722+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.176884+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:01.176884+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: cluster 2026-03-09T17:33:01.683954+0000 mon.a (mon.0) 2299 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: cluster 2026-03-09T17:33:01.683954+0000 mon.a (mon.0) 2299 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.113553+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.113553+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.114958+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.114958+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.115085+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.115085+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.116232+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:02.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:02 vm02 bash[23351]: audit 2026-03-09T17:33:02.116232+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:01.734236+0000 mgr.y (mgr.14505) 287 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:01.734236+0000 mgr.y (mgr.14505) 287 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.171854+0000 mon.a (mon.0) 2302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.171854+0000 mon.a (mon.0) 2302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.171911+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.171911+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: cluster 2026-03-09T17:33:02.185292+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: cluster 2026-03-09T17:33:02.185292+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.186470+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.186470+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 
192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.186855+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:03 vm00 bash[28333]: audit 2026-03-09T17:33:02.186855+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:01.734236+0000 mgr.y (mgr.14505) 287 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:01.734236+0000 mgr.y (mgr.14505) 287 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.171854+0000 mon.a (mon.0) 2302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.171854+0000 mon.a (mon.0) 2302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.171911+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.171911+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]': finished 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: cluster 2026-03-09T17:33:02.185292+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: cluster 2026-03-09T17:33:02.185292+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.186470+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.186470+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.186855+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:03 vm00 bash[20770]: audit 2026-03-09T17:33:02.186855+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:01.734236+0000 mgr.y (mgr.14505) 287 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:01.734236+0000 mgr.y (mgr.14505) 287 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.171854+0000 mon.a (mon.0) 2302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.171854+0000 mon.a (mon.0) 2302 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59916-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.171911+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]': finished 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.171911+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-45"}]': finished 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: cluster 2026-03-09T17:33:02.185292+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: cluster 2026-03-09T17:33:02.185292+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.186470+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.186470+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.186855+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:03 vm02 bash[23351]: audit 2026-03-09T17:33:02.186855+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: cluster 2026-03-09T17:33:02.740546+0000 mgr.y (mgr.14505) 288 : cluster [DBG] pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: cluster 2026-03-09T17:33:02.740546+0000 mgr.y (mgr.14505) 288 : cluster [DBG] pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: cluster 2026-03-09T17:33:03.196441+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: cluster 2026-03-09T17:33:03.196441+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: audit 2026-03-09T17:33:04.179950+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: audit 2026-03-09T17:33:04.179950+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: cluster 2026-03-09T17:33:04.209198+0000 mon.a (mon.0) 2308 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: cluster 2026-03-09T17:33:04.209198+0000 mon.a (mon.0) 2308 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: audit 2026-03-09T17:33:04.209453+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: audit 2026-03-09T17:33:04.209453+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: audit 2026-03-09T17:33:04.211464+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:04 vm00 bash[28333]: audit 2026-03-09T17:33:04.211464+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: cluster 2026-03-09T17:33:02.740546+0000 mgr.y (mgr.14505) 288 : cluster [DBG] pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: cluster 2026-03-09T17:33:02.740546+0000 mgr.y (mgr.14505) 288 : cluster [DBG] pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: cluster 2026-03-09T17:33:03.196441+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: cluster 2026-03-09T17:33:03.196441+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: audit 2026-03-09T17:33:04.179950+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: audit 2026-03-09T17:33:04.179950+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:04.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: cluster 2026-03-09T17:33:04.209198+0000 mon.a (mon.0) 2308 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T17:33:04.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: cluster 2026-03-09T17:33:04.209198+0000 mon.a (mon.0) 2308 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T17:33:04.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: audit 2026-03-09T17:33:04.209453+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: audit 2026-03-09T17:33:04.209453+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: audit 2026-03-09T17:33:04.211464+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:04 vm00 bash[20770]: audit 2026-03-09T17:33:04.211464+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: cluster 2026-03-09T17:33:02.740546+0000 mgr.y (mgr.14505) 288 : cluster [DBG] pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: cluster 2026-03-09T17:33:02.740546+0000 mgr.y (mgr.14505) 288 : cluster [DBG] pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: cluster 2026-03-09T17:33:03.196441+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: cluster 2026-03-09T17:33:03.196441+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: audit 2026-03-09T17:33:04.179950+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: audit 2026-03-09T17:33:04.179950+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59916-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: cluster 2026-03-09T17:33:04.209198+0000 mon.a (mon.0) 2308 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: cluster 2026-03-09T17:33:04.209198+0000 mon.a (mon.0) 2308 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: audit 2026-03-09T17:33:04.209453+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: audit 2026-03-09T17:33:04.209453+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: audit 2026-03-09T17:33:04.211464+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:04 vm02 bash[23351]: audit 2026-03-09T17:33:04.211464+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: cluster 2026-03-09T17:33:04.740996+0000 mgr.y (mgr.14505) 289 : cluster [DBG] pgmap v438: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: cluster 2026-03-09T17:33:04.740996+0000 mgr.y (mgr.14505) 289 : cluster [DBG] pgmap v438: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: audit 2026-03-09T17:33:05.183245+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: audit 2026-03-09T17:33:05.183245+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: cluster 2026-03-09T17:33:05.197692+0000 mon.a (mon.0) 2311 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: cluster 2026-03-09T17:33:05.197692+0000 mon.a (mon.0) 2311 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: audit 2026-03-09T17:33:05.265597+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: audit 2026-03-09T17:33:05.265597+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: audit 2026-03-09T17:33:05.266942+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:06 vm00 bash[28333]: audit 2026-03-09T17:33:05.266942+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:33:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:33:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: cluster 2026-03-09T17:33:04.740996+0000 mgr.y (mgr.14505) 289 : cluster [DBG] pgmap v438: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: cluster 2026-03-09T17:33:04.740996+0000 mgr.y (mgr.14505) 289 : cluster [DBG] pgmap v438: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: audit 2026-03-09T17:33:05.183245+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: audit 2026-03-09T17:33:05.183245+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: cluster 2026-03-09T17:33:05.197692+0000 mon.a (mon.0) 2311 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T17:33:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: cluster 2026-03-09T17:33:05.197692+0000 mon.a (mon.0) 2311 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T17:33:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: audit 2026-03-09T17:33:05.265597+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: audit 2026-03-09T17:33:05.265597+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: audit 2026-03-09T17:33:05.266942+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:06 vm00 bash[20770]: audit 2026-03-09T17:33:05.266942+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: cluster 2026-03-09T17:33:04.740996+0000 mgr.y (mgr.14505) 289 : cluster [DBG] pgmap v438: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: cluster 2026-03-09T17:33:04.740996+0000 mgr.y (mgr.14505) 289 : cluster [DBG] pgmap v438: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: audit 2026-03-09T17:33:05.183245+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: audit 2026-03-09T17:33:05.183245+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: cluster 2026-03-09T17:33:05.197692+0000 mon.a (mon.0) 2311 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: cluster 2026-03-09T17:33:05.197692+0000 mon.a (mon.0) 2311 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: audit 2026-03-09T17:33:05.265597+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: audit 2026-03-09T17:33:05.265597+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: audit 2026-03-09T17:33:05.266942+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:06 vm02 bash[23351]: audit 2026-03-09T17:33:05.266942+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.195356+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.195356+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.200429+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.200429+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: cluster 2026-03-09T17:33:06.200756+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: cluster 2026-03-09T17:33:06.200756+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.206806+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.206806+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.208500+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.208500+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.208731+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: audit 2026-03-09T17:33:06.208731+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: cluster 2026-03-09T17:33:06.684566+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:07 vm00 bash[28333]: cluster 2026-03-09T17:33:06.684566+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.195356+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.195356+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.200429+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.200429+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: cluster 2026-03-09T17:33:06.200756+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: cluster 2026-03-09T17:33:06.200756+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.206806+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.206806+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.208500+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.208500+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.208731+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: audit 2026-03-09T17:33:06.208731+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: cluster 2026-03-09T17:33:06.684566+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:07 vm00 bash[20770]: cluster 2026-03-09T17:33:06.684566+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.195356+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.195356+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.200429+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.200429+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: cluster 2026-03-09T17:33:06.200756+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: cluster 2026-03-09T17:33:06.200756+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.206806+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.206806+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.208500+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.208500+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.208731+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: audit 2026-03-09T17:33:06.208731+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: cluster 2026-03-09T17:33:06.684566+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:07.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:07 vm02 bash[23351]: cluster 2026-03-09T17:33:06.684566+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: cluster 2026-03-09T17:33:06.741352+0000 mgr.y (mgr.14505) 290 : cluster [DBG] pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: cluster 2026-03-09T17:33:06.741352+0000 mgr.y (mgr.14505) 290 : cluster [DBG] pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.232340+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.232340+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.232433+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.232433+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: cluster 2026-03-09T17:33:07.236241+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: cluster 2026-03-09T17:33:07.236241+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.243939+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.243939+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.245441+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.245441+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.247697+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.247697+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.247893+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:08 vm02 bash[23351]: audit 2026-03-09T17:33:07.247893+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: cluster 2026-03-09T17:33:06.741352+0000 mgr.y (mgr.14505) 290 : cluster [DBG] pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: cluster 2026-03-09T17:33:06.741352+0000 mgr.y (mgr.14505) 290 : cluster [DBG] pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.232340+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.232340+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.232433+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.232433+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: cluster 2026-03-09T17:33:07.236241+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: cluster 2026-03-09T17:33:07.236241+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.243939+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.243939+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.245441+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.245441+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.247697+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.247697+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.247893+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:08 vm00 bash[28333]: audit 2026-03-09T17:33:07.247893+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: cluster 2026-03-09T17:33:06.741352+0000 mgr.y (mgr.14505) 290 : cluster [DBG] pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: cluster 2026-03-09T17:33:06.741352+0000 mgr.y (mgr.14505) 290 : cluster [DBG] pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.232340+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.232340+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.232433+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.232433+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: cluster 2026-03-09T17:33:07.236241+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: cluster 2026-03-09T17:33:07.236241+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.243939+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.243939+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.245441+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.245441+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.247697+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.247697+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.100:0/4094953006' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.247893+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:08 vm00 bash[20770]: audit 2026-03-09T17:33:07.247893+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: cluster 2026-03-09T17:33:08.314191+0000 mon.a (mon.0) 2323 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: cluster 2026-03-09T17:33:08.314191+0000 mon.a (mon.0) 2323 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.332024+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]': finished 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.332024+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]': finished 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.332124+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.332124+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.335786+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.335786+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: cluster 2026-03-09T17:33:08.339408+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: cluster 2026-03-09T17:33:08.339408+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.339831+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.339831+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.357349+0000 mon.a (mon.0) 2328 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.357349+0000 mon.a (mon.0) 2328 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.358105+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.358105+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.358261+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:09 vm02 bash[23351]: audit 2026-03-09T17:33:08.358261+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: cluster 2026-03-09T17:33:08.314191+0000 mon.a (mon.0) 2323 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: cluster 2026-03-09T17:33:08.314191+0000 mon.a (mon.0) 2323 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.332024+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]': finished 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.332024+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]': finished 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.332124+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.332124+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.335786+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.335786+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: cluster 2026-03-09T17:33:08.339408+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T17:33:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: cluster 2026-03-09T17:33:08.339408+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.339831+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.339831+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.357349+0000 mon.a (mon.0) 2328 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.357349+0000 mon.a (mon.0) 2328 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.358105+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.358105+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.358261+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:09 vm00 bash[28333]: audit 2026-03-09T17:33:08.358261+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: cluster 2026-03-09T17:33:08.314191+0000 mon.a (mon.0) 2323 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: cluster 2026-03-09T17:33:08.314191+0000 mon.a (mon.0) 2323 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.332024+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]': finished 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.332024+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-47", "mode": "writeback"}]': finished 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.332124+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.332124+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59916-61"}]': finished 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.335786+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.335786+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: cluster 2026-03-09T17:33:08.339408+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: cluster 2026-03-09T17:33:08.339408+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.339831+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.339831+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.357349+0000 mon.a (mon.0) 2328 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.357349+0000 mon.a (mon.0) 2328 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.358105+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.358105+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.358261+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:09 vm00 bash[20770]: audit 2026-03-09T17:33:08.358261+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:10.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: cluster 2026-03-09T17:33:08.741739+0000 mgr.y (mgr.14505) 291 : cluster [DBG] pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: cluster 2026-03-09T17:33:08.741739+0000 mgr.y (mgr.14505) 291 : cluster [DBG] pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.346253+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.346253+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.346399+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.346399+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.352679+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.352679+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: cluster 2026-03-09T17:33:09.356538+0000 mon.a (mon.0) 2333 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: cluster 2026-03-09T17:33:09.356538+0000 mon.a (mon.0) 2333 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.357771+0000 mon.a (mon.0) 2334 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.357771+0000 mon.a (mon.0) 2334 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.357951+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:10 vm00 bash[28333]: audit 2026-03-09T17:33:09.357951+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: cluster 2026-03-09T17:33:08.741739+0000 mgr.y (mgr.14505) 291 : cluster [DBG] pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: cluster 2026-03-09T17:33:08.741739+0000 mgr.y (mgr.14505) 291 : cluster [DBG] pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.346253+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.346253+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.346399+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.346399+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.352679+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.352679+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: cluster 2026-03-09T17:33:09.356538+0000 mon.a (mon.0) 2333 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: cluster 2026-03-09T17:33:09.356538+0000 mon.a (mon.0) 2333 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.357771+0000 mon.a (mon.0) 2334 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.357771+0000 mon.a (mon.0) 2334 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.357951+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:10 vm00 bash[20770]: audit 2026-03-09T17:33:09.357951+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: cluster 2026-03-09T17:33:08.741739+0000 mgr.y (mgr.14505) 291 : cluster [DBG] pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: cluster 2026-03-09T17:33:08.741739+0000 mgr.y (mgr.14505) 291 : cluster [DBG] pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.346253+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.346253+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.346399+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.346399+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59916-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.352679+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.352679+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: cluster 2026-03-09T17:33:09.356538+0000 mon.a (mon.0) 2333 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: cluster 2026-03-09T17:33:09.356538+0000 mon.a (mon.0) 2333 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.357771+0000 mon.a (mon.0) 2334 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.357771+0000 mon.a (mon.0) 2334 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.357951+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:10 vm02 bash[23351]: audit 2026-03-09T17:33:09.357951+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:10.361998+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:10.361998+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:10.372742+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:10.372742+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:10.380382+0000 mon.a (mon.0) 2337 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:10.380382+0000 mon.a (mon.0) 2337 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:10.382940+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:10.382940+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:10.742105+0000 mgr.y (mgr.14505) 292 : cluster [DBG] pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:10.742105+0000 mgr.y (mgr.14505) 292 : cluster [DBG] pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:11.361976+0000 mon.a (mon.0) 2339 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:11.361976+0000 mon.a (mon.0) 2339 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.365415+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.365415+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.365660+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.365660+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:11.370446+0000 mon.a (mon.0) 2342 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: cluster 2026-03-09T17:33:11.370446+0000 mon.a (mon.0) 2342 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.373031+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.373031+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.375221+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.745 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:11 vm02 bash[23351]: audit 2026-03-09T17:33:11.375221+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:10.361998+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:10.361998+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:10.372742+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:10.372742+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:10.380382+0000 mon.a (mon.0) 2337 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T17:33:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:10.380382+0000 mon.a (mon.0) 2337 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:10.382940+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:10.382940+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:10.742105+0000 mgr.y (mgr.14505) 292 : cluster [DBG] pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:10.742105+0000 mgr.y (mgr.14505) 292 : cluster [DBG] pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:11.361976+0000 mon.a (mon.0) 2339 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:11.361976+0000 mon.a (mon.0) 2339 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.365415+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.365415+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.365660+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.365660+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:11.370446+0000 mon.a (mon.0) 2342 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: cluster 2026-03-09T17:33:11.370446+0000 mon.a (mon.0) 2342 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.373031+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.373031+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.375221+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:11 vm00 bash[28333]: audit 2026-03-09T17:33:11.375221+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:10.361998+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:10.361998+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:10.372742+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:10.372742+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:10.380382+0000 mon.a (mon.0) 2337 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:10.380382+0000 mon.a (mon.0) 2337 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:10.382940+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:10.382940+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:10.742105+0000 mgr.y (mgr.14505) 292 : cluster [DBG] pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:10.742105+0000 mgr.y (mgr.14505) 292 : cluster [DBG] pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:11.361976+0000 mon.a (mon.0) 2339 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:11.361976+0000 mon.a (mon.0) 2339 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.365415+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.365415+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59916-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.365660+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.365660+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:11.370446+0000 mon.a (mon.0) 2342 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: cluster 2026-03-09T17:33:11.370446+0000 mon.a (mon.0) 2342 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.373031+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.373031+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.375221+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:11 vm00 bash[20770]: audit 2026-03-09T17:33:11.375221+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:12.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:33:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:33:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: cluster 2026-03-09T17:33:11.685219+0000 mon.a (mon.0) 2344 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: cluster 2026-03-09T17:33:11.685219+0000 mon.a (mon.0) 2344 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:11.745069+0000 mgr.y (mgr.14505) 293 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:11.745069+0000 mgr.y (mgr.14505) 293 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:12.369164+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:12.369164+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: cluster 2026-03-09T17:33:12.373680+0000 mon.a (mon.0) 2346 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: cluster 2026-03-09T17:33:12.373680+0000 mon.a (mon.0) 2346 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:12.381333+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:12.381333+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:12.385996+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:12 vm00 bash[28333]: audit 2026-03-09T17:33:12.385996+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: cluster 2026-03-09T17:33:11.685219+0000 mon.a (mon.0) 2344 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: cluster 2026-03-09T17:33:11.685219+0000 mon.a (mon.0) 2344 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:11.745069+0000 mgr.y (mgr.14505) 293 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:11.745069+0000 mgr.y (mgr.14505) 293 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:12.369164+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:12.369164+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: cluster 2026-03-09T17:33:12.373680+0000 mon.a (mon.0) 2346 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: cluster 2026-03-09T17:33:12.373680+0000 mon.a (mon.0) 2346 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:12.381333+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:12.381333+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:12.385996+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:12 vm00 bash[20770]: audit 2026-03-09T17:33:12.385996+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: cluster 2026-03-09T17:33:11.685219+0000 mon.a (mon.0) 2344 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: cluster 2026-03-09T17:33:11.685219+0000 mon.a (mon.0) 2344 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:11.745069+0000 mgr.y (mgr.14505) 293 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:11.745069+0000 mgr.y (mgr.14505) 293 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:12.369164+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:12.369164+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: cluster 2026-03-09T17:33:12.373680+0000 mon.a (mon.0) 2346 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: cluster 2026-03-09T17:33:12.373680+0000 mon.a (mon.0) 2346 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:12.381333+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:12.381333+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:12.385996+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:12 vm02 bash[23351]: audit 2026-03-09T17:33:12.385996+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:33:13.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:12.681606+0000 mon.c (mon.2) 571 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:12.681606+0000 mon.c (mon.2) 571 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: cluster 2026-03-09T17:33:12.742539+0000 mgr.y (mgr.14505) 294 : cluster [DBG] pgmap v450: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: cluster 2026-03-09T17:33:12.742539+0000 mgr.y (mgr.14505) 294 : cluster [DBG] pgmap v450: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.379372+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.379372+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.381820+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.381820+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: cluster 2026-03-09T17:33:13.383258+0000 mon.a (mon.0) 2349 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: cluster 2026-03-09T17:33:13.383258+0000 mon.a (mon.0) 2349 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.388736+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.388736+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.391017+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:13 vm00 bash[28333]: audit 2026-03-09T17:33:13.391017+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:12.681606+0000 mon.c (mon.2) 571 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:12.681606+0000 mon.c (mon.2) 571 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: cluster 2026-03-09T17:33:12.742539+0000 mgr.y (mgr.14505) 294 : cluster [DBG] pgmap v450: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: cluster 2026-03-09T17:33:12.742539+0000 mgr.y (mgr.14505) 294 : cluster [DBG] pgmap v450: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.379372+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.379372+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.381820+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.381820+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: cluster 2026-03-09T17:33:13.383258+0000 mon.a (mon.0) 2349 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: cluster 2026-03-09T17:33:13.383258+0000 mon.a (mon.0) 2349 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.388736+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.388736+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.391017+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:13 vm00 bash[20770]: audit 2026-03-09T17:33:13.391017+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:12.681606+0000 mon.c (mon.2) 571 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:12.681606+0000 mon.c (mon.2) 571 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: cluster 2026-03-09T17:33:12.742539+0000 mgr.y (mgr.14505) 294 : cluster [DBG] pgmap v450: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: cluster 2026-03-09T17:33:12.742539+0000 mgr.y (mgr.14505) 294 : cluster [DBG] pgmap v450: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.379372+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.379372+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.381820+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.381820+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: cluster 2026-03-09T17:33:13.383258+0000 mon.a (mon.0) 2349 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: cluster 2026-03-09T17:33:13.383258+0000 mon.a (mon.0) 2349 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.388736+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.388736+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.391017+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:13 vm02 bash[23351]: audit 2026-03-09T17:33:13.391017+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.382897+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.382897+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.383021+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.383021+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 
192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: cluster 2026-03-09T17:33:14.394995+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: cluster 2026-03-09T17:33:14.394995+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.396221+0000 mon.a (mon.0) 2355 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.396221+0000 mon.a (mon.0) 2355 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.431867+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.431867+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.433174+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: audit 2026-03-09T17:33:14.433174+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: cluster 2026-03-09T17:33:14.742891+0000 mgr.y (mgr.14505) 295 : cluster [DBG] pgmap v453: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:15 vm00 bash[28333]: cluster 2026-03-09T17:33:14.742891+0000 mgr.y (mgr.14505) 295 : cluster [DBG] pgmap v453: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:15.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.382897+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.382897+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.383021+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.383021+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: cluster 2026-03-09T17:33:14.394995+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: cluster 2026-03-09T17:33:14.394995+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.396221+0000 mon.a (mon.0) 2355 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.396221+0000 mon.a (mon.0) 2355 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.431867+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.431867+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.433174+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: audit 2026-03-09T17:33:14.433174+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: cluster 2026-03-09T17:33:14.742891+0000 mgr.y (mgr.14505) 295 : cluster [DBG] pgmap v453: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:15.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:15 vm00 bash[20770]: cluster 2026-03-09T17:33:14.742891+0000 mgr.y (mgr.14505) 295 : cluster [DBG] pgmap v453: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.382897+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.382897+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.383021+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.383021+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: cluster 2026-03-09T17:33:14.394995+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: cluster 2026-03-09T17:33:14.394995+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.396221+0000 mon.a (mon.0) 2355 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.396221+0000 mon.a (mon.0) 2355 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]: dispatch 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.431867+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.431867+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.433174+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: audit 2026-03-09T17:33:14.433174+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: cluster 2026-03-09T17:33:14.742891+0000 mgr.y (mgr.14505) 295 : cluster [DBG] pgmap v453: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:15 vm02 bash[23351]: cluster 2026-03-09T17:33:14.742891+0000 mgr.y (mgr.14505) 295 : cluster [DBG] pgmap v453: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.386848+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.386848+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.386892+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.386892+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.389717+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.389717+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: cluster 2026-03-09T17:33:15.398707+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: cluster 2026-03-09T17:33:15.398707+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.400440+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.400440+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.419255+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.419255+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.421593+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.421593+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.422287+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.422287+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.422649+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.422649+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.423214+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.423214+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.423572+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:15.423572+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.389985+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.389985+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.390121+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.390121+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.398126+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 
192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.398126+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: cluster 2026-03-09T17:33:16.400775+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: cluster 2026-03-09T17:33:16.400775+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.402374+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:16 vm00 bash[28333]: audit 2026-03-09T17:33:16.402374+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.386848+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.386848+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.386892+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.386892+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.389717+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.389717+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: cluster 2026-03-09T17:33:15.398707+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: cluster 2026-03-09T17:33:15.398707+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.400440+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.400440+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.419255+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.419255+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.421593+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.421593+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.422287+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.422287+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 
192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.422649+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.422649+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.423214+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.423214+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.423572+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:15.423572+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.389985+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.389985+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.390121+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.390121+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.398126+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.398126+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: cluster 2026-03-09T17:33:16.400775+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: cluster 2026-03-09T17:33:16.400775+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.402374+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:16 vm00 bash[20770]: audit 2026-03-09T17:33:16.402374+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:33:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:33:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.386848+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.386848+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 192.168.123.100:0/1264116498' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59916-62"}]': finished 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.386892+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.386892+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.389717+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.389717+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: cluster 2026-03-09T17:33:15.398707+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: cluster 2026-03-09T17:33:15.398707+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.400440+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.400440+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.419255+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.419255+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.421593+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.421593+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.422287+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.422287+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.422649+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.422649+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.423214+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.423214+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.423572+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:15.423572+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.389985+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.389985+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]': finished 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.390121+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.390121+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59916-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.398126+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.398126+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: cluster 2026-03-09T17:33:16.400775+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: cluster 2026-03-09T17:33:16.400775+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.402374+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:16.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:16 vm02 bash[23351]: audit 2026-03-09T17:33:16.402374+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.439448+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.439448+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.440202+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.440202+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.440635+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.440635+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.441326+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: audit 2026-03-09T17:33:16.441326+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: cluster 2026-03-09T17:33:16.743250+0000 mgr.y (mgr.14505) 296 : cluster [DBG] pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:17 vm00 bash[28333]: cluster 2026-03-09T17:33:16.743250+0000 mgr.y (mgr.14505) 296 : cluster [DBG] pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.439448+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.439448+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.440202+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.440202+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.440635+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.440635+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.441326+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: audit 2026-03-09T17:33:16.441326+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: cluster 2026-03-09T17:33:16.743250+0000 mgr.y (mgr.14505) 296 : cluster [DBG] pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:17 vm00 bash[20770]: cluster 2026-03-09T17:33:16.743250+0000 mgr.y (mgr.14505) 296 : cluster [DBG] pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.439448+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.439448+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.440202+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.440202+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.440635+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.440635+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.441326+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: audit 2026-03-09T17:33:16.441326+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-47"}]: dispatch 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: cluster 2026-03-09T17:33:16.743250+0000 mgr.y (mgr.14505) 296 : cluster [DBG] pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:17 vm02 bash[23351]: cluster 2026-03-09T17:33:16.743250+0000 mgr.y (mgr.14505) 296 : cluster [DBG] pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: cluster 2026-03-09T17:33:17.415984+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: cluster 2026-03-09T17:33:17.415984+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: audit 2026-03-09T17:33:18.396402+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: audit 2026-03-09T17:33:18.396402+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: cluster 2026-03-09T17:33:18.399302+0000 mon.a (mon.0) 2372 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: cluster 2026-03-09T17:33:18.399302+0000 mon.a (mon.0) 2372 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T17:33:18.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: audit 2026-03-09T17:33:18.401488+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: audit 2026-03-09T17:33:18.401488+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: audit 2026-03-09T17:33:18.403445+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:18 vm00 bash[28333]: audit 2026-03-09T17:33:18.403445+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: cluster 2026-03-09T17:33:17.415984+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: cluster 2026-03-09T17:33:17.415984+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: audit 2026-03-09T17:33:18.396402+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: audit 2026-03-09T17:33:18.396402+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: cluster 2026-03-09T17:33:18.399302+0000 mon.a (mon.0) 2372 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: cluster 2026-03-09T17:33:18.399302+0000 mon.a (mon.0) 2372 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: audit 2026-03-09T17:33:18.401488+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: audit 2026-03-09T17:33:18.401488+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: audit 2026-03-09T17:33:18.403445+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:18 vm00 bash[20770]: audit 2026-03-09T17:33:18.403445+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: cluster 2026-03-09T17:33:17.415984+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: cluster 2026-03-09T17:33:17.415984+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: audit 2026-03-09T17:33:18.396402+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: audit 2026-03-09T17:33:18.396402+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59916-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: cluster 2026-03-09T17:33:18.399302+0000 mon.a (mon.0) 2372 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: cluster 2026-03-09T17:33:18.399302+0000 mon.a (mon.0) 2372 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: audit 2026-03-09T17:33:18.401488+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: audit 2026-03-09T17:33:18.401488+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: audit 2026-03-09T17:33:18.403445+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:18 vm02 bash[23351]: audit 2026-03-09T17:33:18.403445+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:19.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:19 vm00 bash[28333]: cluster 2026-03-09T17:33:18.743552+0000 mgr.y (mgr.14505) 297 : cluster [DBG] pgmap v459: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:19.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:19 vm00 bash[28333]: cluster 2026-03-09T17:33:18.743552+0000 mgr.y (mgr.14505) 297 : cluster [DBG] pgmap v459: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:19.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:19 vm00 bash[28333]: audit 2026-03-09T17:33:19.399899+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:19.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:19 vm00 bash[28333]: audit 2026-03-09T17:33:19.399899+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:19.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:19 vm00 bash[28333]: cluster 2026-03-09T17:33:19.402227+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T17:33:19.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:19 vm00 bash[28333]: cluster 2026-03-09T17:33:19.402227+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T17:33:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:19 vm00 bash[20770]: cluster 2026-03-09T17:33:18.743552+0000 mgr.y (mgr.14505) 297 : cluster [DBG] pgmap v459: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:19 vm00 bash[20770]: cluster 2026-03-09T17:33:18.743552+0000 mgr.y (mgr.14505) 297 : cluster [DBG] pgmap v459: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:19 vm00 bash[20770]: audit 2026-03-09T17:33:19.399899+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:19 vm00 bash[20770]: audit 2026-03-09T17:33:19.399899+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:19 vm00 bash[20770]: cluster 2026-03-09T17:33:19.402227+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T17:33:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:19 vm00 bash[20770]: cluster 2026-03-09T17:33:19.402227+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T17:33:19.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:19 vm02 bash[23351]: cluster 2026-03-09T17:33:18.743552+0000 mgr.y (mgr.14505) 297 : cluster [DBG] pgmap v459: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:19 vm02 bash[23351]: cluster 2026-03-09T17:33:18.743552+0000 mgr.y (mgr.14505) 297 : cluster [DBG] pgmap v459: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:19 vm02 bash[23351]: audit 2026-03-09T17:33:19.399899+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:19 vm02 bash[23351]: audit 2026-03-09T17:33:19.399899+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:19 vm02 bash[23351]: cluster 2026-03-09T17:33:19.402227+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T17:33:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:19 vm02 bash[23351]: cluster 2026-03-09T17:33:19.402227+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T17:33:20.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: cluster 2026-03-09T17:33:19.431708+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: cluster 2026-03-09T17:33:19.431708+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:19.440682+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:19.440682+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:19.446546+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:19.446546+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.402794+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.402794+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.405371+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.405371+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: cluster 2026-03-09T17:33:20.405591+0000 mon.a (mon.0) 2379 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: cluster 2026-03-09T17:33:20.405591+0000 mon.a (mon.0) 2379 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.406861+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.406861+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.412667+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.412667+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.412912+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:20 vm00 bash[28333]: audit 2026-03-09T17:33:20.412912+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: cluster 2026-03-09T17:33:19.431708+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: cluster 2026-03-09T17:33:19.431708+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:19.440682+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:19.440682+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:19.446546+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:19.446546+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.402794+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.402794+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.405371+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.405371+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: cluster 2026-03-09T17:33:20.405591+0000 mon.a (mon.0) 2379 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: cluster 2026-03-09T17:33:20.405591+0000 mon.a (mon.0) 2379 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.406861+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.406861+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.412667+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.412667+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.412912+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:20 vm00 bash[20770]: audit 2026-03-09T17:33:20.412912+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: cluster 2026-03-09T17:33:19.431708+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: cluster 2026-03-09T17:33:19.431708+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:19.440682+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:19.440682+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:19.446546+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:19.446546+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.402794+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.402794+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.405371+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.405371+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: cluster 2026-03-09T17:33:20.405591+0000 mon.a (mon.0) 2379 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: cluster 2026-03-09T17:33:20.405591+0000 mon.a (mon.0) 2379 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.406861+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.406861+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.412667+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.412667+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.412912+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:20 vm02 bash[23351]: audit 2026-03-09T17:33:20.412912+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.755 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: cluster 2026-03-09T17:33:20.743848+0000 mgr.y (mgr.14505) 298 : cluster [DBG] pgmap v462: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:21.755 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: cluster 2026-03-09T17:33:20.743848+0000 mgr.y (mgr.14505) 298 : cluster [DBG] pgmap v462: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.405600+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.405600+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.405650+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.405650+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.408021+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.408021+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: cluster 2026-03-09T17:33:21.409173+0000 mon.a (mon.0) 2384 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: cluster 2026-03-09T17:33:21.409173+0000 mon.a (mon.0) 2384 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.410954+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.410954+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.417651+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.417651+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.417866+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.756 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:21 vm02 bash[23351]: audit 2026-03-09T17:33:21.417866+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: cluster 2026-03-09T17:33:20.743848+0000 mgr.y (mgr.14505) 298 : cluster [DBG] pgmap v462: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: cluster 2026-03-09T17:33:20.743848+0000 mgr.y (mgr.14505) 298 : cluster [DBG] pgmap v462: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.405600+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.405600+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.405650+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.405650+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.408021+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.408021+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: cluster 2026-03-09T17:33:21.409173+0000 mon.a (mon.0) 2384 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: cluster 2026-03-09T17:33:21.409173+0000 mon.a (mon.0) 2384 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.410954+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.410954+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.417651+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.417651+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.417866+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:21 vm00 bash[28333]: audit 2026-03-09T17:33:21.417866+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: cluster 2026-03-09T17:33:20.743848+0000 mgr.y (mgr.14505) 298 : cluster [DBG] pgmap v462: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: cluster 2026-03-09T17:33:20.743848+0000 mgr.y (mgr.14505) 298 : cluster [DBG] pgmap v462: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.405600+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.405600+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.405650+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.405650+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.408021+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.408021+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: cluster 2026-03-09T17:33:21.409173+0000 mon.a (mon.0) 2384 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: cluster 2026-03-09T17:33:21.409173+0000 mon.a (mon.0) 2384 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.410954+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.410954+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.417651+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.417651+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.100:0/3451083253' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.417866+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:21 vm00 bash[20770]: audit 2026-03-09T17:33:21.417866+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]: dispatch 2026-03-09T17:33:22.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:33:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:21.755612+0000 mgr.y (mgr.14505) 299 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:21.755612+0000 mgr.y (mgr.14505) 299 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: cluster 2026-03-09T17:33:22.406114+0000 mon.a (mon.0) 2387 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: cluster 2026-03-09T17:33:22.406114+0000 mon.a (mon.0) 2387 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.415438+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.415438+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.415564+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.415564+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: cluster 2026-03-09T17:33:22.423086+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: cluster 2026-03-09T17:33:22.423086+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.440304+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.440304+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.441560+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.441560+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.441771+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:22 vm00 bash[28333]: audit 2026-03-09T17:33:22.441771+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:21.755612+0000 mgr.y (mgr.14505) 299 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:21.755612+0000 mgr.y (mgr.14505) 299 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: cluster 2026-03-09T17:33:22.406114+0000 mon.a (mon.0) 2387 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: cluster 2026-03-09T17:33:22.406114+0000 mon.a (mon.0) 2387 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.415438+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.415438+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.415564+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.415564+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: cluster 2026-03-09T17:33:22.423086+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: cluster 2026-03-09T17:33:22.423086+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.440304+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.440304+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.441560+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.441560+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.441771+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:22 vm00 bash[20770]: audit 2026-03-09T17:33:22.441771+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:21.755612+0000 mgr.y (mgr.14505) 299 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:21.755612+0000 mgr.y (mgr.14505) 299 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: cluster 2026-03-09T17:33:22.406114+0000 mon.a (mon.0) 2387 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: cluster 2026-03-09T17:33:22.406114+0000 mon.a (mon.0) 2387 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.415438+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]': finished 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.415438+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-49", "mode": "readproxy"}]': finished 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.415564+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.415564+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59916-63"}]': finished 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: cluster 2026-03-09T17:33:22.423086+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: cluster 2026-03-09T17:33:22.423086+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.440304+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.440304+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.441560+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.441560+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.441771+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:22 vm02 bash[23351]: audit 2026-03-09T17:33:22.441771+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:23.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: cluster 2026-03-09T17:33:22.744201+0000 mgr.y (mgr.14505) 300 : cluster [DBG] pgmap v465: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:23.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: cluster 2026-03-09T17:33:22.744201+0000 mgr.y (mgr.14505) 300 : cluster [DBG] pgmap v465: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:23.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: audit 2026-03-09T17:33:23.418671+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:23.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: audit 2026-03-09T17:33:23.418671+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: cluster 2026-03-09T17:33:23.422777+0000 mon.a (mon.0) 2395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: cluster 2026-03-09T17:33:23.422777+0000 mon.a (mon.0) 2395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: audit 2026-03-09T17:33:23.423090+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:23 vm00 bash[28333]: audit 2026-03-09T17:33:23.423090+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: cluster 2026-03-09T17:33:22.744201+0000 mgr.y (mgr.14505) 300 : cluster [DBG] pgmap v465: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: cluster 2026-03-09T17:33:22.744201+0000 mgr.y (mgr.14505) 300 : cluster [DBG] pgmap v465: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: audit 2026-03-09T17:33:23.418671+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: audit 2026-03-09T17:33:23.418671+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: cluster 2026-03-09T17:33:23.422777+0000 mon.a (mon.0) 2395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: cluster 2026-03-09T17:33:23.422777+0000 mon.a (mon.0) 2395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: audit 2026-03-09T17:33:23.423090+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:23 vm00 bash[20770]: audit 2026-03-09T17:33:23.423090+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: cluster 2026-03-09T17:33:22.744201+0000 mgr.y (mgr.14505) 300 : cluster [DBG] pgmap v465: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: cluster 2026-03-09T17:33:22.744201+0000 mgr.y (mgr.14505) 300 : cluster [DBG] pgmap v465: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 698 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: audit 2026-03-09T17:33:23.418671+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: audit 2026-03-09T17:33:23.418671+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59916-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: cluster 2026-03-09T17:33:23.422777+0000 mon.a (mon.0) 2395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: cluster 2026-03-09T17:33:23.422777+0000 mon.a (mon.0) 2395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: audit 2026-03-09T17:33:23.423090+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:23 vm02 bash[23351]: audit 2026-03-09T17:33:23.423090+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:25.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:25 vm00 bash[28333]: cluster 2026-03-09T17:33:24.438033+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:25 vm00 bash[28333]: cluster 2026-03-09T17:33:24.438033+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:25 vm00 bash[28333]: cluster 2026-03-09T17:33:24.744481+0000 mgr.y (mgr.14505) 301 : cluster [DBG] pgmap v468: 292 pgs: 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:25 vm00 bash[28333]: cluster 2026-03-09T17:33:24.744481+0000 mgr.y (mgr.14505) 301 : cluster [DBG] pgmap v468: 292 pgs: 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:25 vm00 bash[20770]: cluster 2026-03-09T17:33:24.438033+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:25 vm00 bash[20770]: cluster 2026-03-09T17:33:24.438033+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:25 vm00 bash[20770]: cluster 2026-03-09T17:33:24.744481+0000 mgr.y (mgr.14505) 301 : cluster [DBG] pgmap v468: 292 pgs: 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:25 vm00 bash[20770]: cluster 2026-03-09T17:33:24.744481+0000 mgr.y (mgr.14505) 301 : cluster [DBG] pgmap v468: 292 pgs: 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:25 vm02 bash[23351]: cluster 2026-03-09T17:33:24.438033+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T17:33:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:25 vm02 bash[23351]: cluster 2026-03-09T17:33:24.438033+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T17:33:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:25 vm02 bash[23351]: cluster 2026-03-09T17:33:24.744481+0000 mgr.y (mgr.14505) 301 : cluster [DBG] pgmap v468: 292 pgs: 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:25 vm02 bash[23351]: cluster 2026-03-09T17:33:24.744481+0000 mgr.y (mgr.14505) 301 : cluster [DBG] pgmap v468: 292 pgs: 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:26.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:26 vm00 bash[28333]: audit 2026-03-09T17:33:25.431117+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:26 vm00 bash[28333]: audit 2026-03-09T17:33:25.431117+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:26 vm00 bash[28333]: cluster 2026-03-09T17:33:25.437960+0000 mon.a (mon.0) 2399 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:26 vm00 bash[28333]: cluster 2026-03-09T17:33:25.437960+0000 mon.a (mon.0) 2399 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:26 vm00 bash[20770]: audit 2026-03-09T17:33:25.431117+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:26 vm00 bash[20770]: audit 2026-03-09T17:33:25.431117+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:26 vm00 bash[20770]: cluster 2026-03-09T17:33:25.437960+0000 mon.a (mon.0) 2399 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:26 vm00 bash[20770]: cluster 2026-03-09T17:33:25.437960+0000 mon.a (mon.0) 2399 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T17:33:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:33:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:33:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:33:26.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:26 vm02 bash[23351]: audit 2026-03-09T17:33:25.431117+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:26 vm02 bash[23351]: audit 2026-03-09T17:33:25.431117+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 
192.168.123.100:0/3743045471' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59916-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:26 vm02 bash[23351]: cluster 2026-03-09T17:33:25.437960+0000 mon.a (mon.0) 2399 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T17:33:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:26 vm02 bash[23351]: cluster 2026-03-09T17:33:25.437960+0000 mon.a (mon.0) 2399 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T17:33:27.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:27 vm00 bash[28333]: cluster 2026-03-09T17:33:26.451855+0000 mon.a (mon.0) 2400 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:27 vm00 bash[28333]: cluster 2026-03-09T17:33:26.451855+0000 mon.a (mon.0) 2400 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:27 vm00 bash[28333]: cluster 2026-03-09T17:33:26.687295+0000 mon.a (mon.0) 2401 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:27 vm00 bash[28333]: cluster 2026-03-09T17:33:26.687295+0000 mon.a (mon.0) 2401 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:27 vm00 bash[28333]: cluster 2026-03-09T17:33:26.744928+0000 mgr.y (mgr.14505) 302 : cluster [DBG] pgmap v471: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:27 vm00 bash[28333]: cluster 2026-03-09T17:33:26.744928+0000 mgr.y (mgr.14505) 302 : cluster [DBG] pgmap v471: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:27 vm00 bash[20770]: cluster 2026-03-09T17:33:26.451855+0000 mon.a (mon.0) 2400 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:27 vm00 bash[20770]: cluster 2026-03-09T17:33:26.451855+0000 mon.a (mon.0) 2400 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:27 vm00 bash[20770]: cluster 2026-03-09T17:33:26.687295+0000 mon.a (mon.0) 2401 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:27 vm00 bash[20770]: cluster 2026-03-09T17:33:26.687295+0000 mon.a (mon.0) 2401 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:27 vm00 bash[20770]: cluster 2026-03-09T17:33:26.744928+0000 mgr.y (mgr.14505) 302 : cluster [DBG] pgmap v471: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:27 vm00 
bash[20770]: cluster 2026-03-09T17:33:26.744928+0000 mgr.y (mgr.14505) 302 : cluster [DBG] pgmap v471: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:27 vm02 bash[23351]: cluster 2026-03-09T17:33:26.451855+0000 mon.a (mon.0) 2400 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T17:33:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:27 vm02 bash[23351]: cluster 2026-03-09T17:33:26.451855+0000 mon.a (mon.0) 2400 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T17:33:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:27 vm02 bash[23351]: cluster 2026-03-09T17:33:26.687295+0000 mon.a (mon.0) 2401 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:27 vm02 bash[23351]: cluster 2026-03-09T17:33:26.687295+0000 mon.a (mon.0) 2401 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:27 vm02 bash[23351]: cluster 2026-03-09T17:33:26.744928+0000 mgr.y (mgr.14505) 302 : cluster [DBG] pgmap v471: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:27 vm02 bash[23351]: cluster 2026-03-09T17:33:26.744928+0000 mgr.y (mgr.14505) 302 : cluster [DBG] pgmap v471: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:28.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: cluster 2026-03-09T17:33:27.451927+0000 mon.a (mon.0) 2402 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T17:33:28.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: cluster 2026-03-09T17:33:27.451927+0000 mon.a (mon.0) 2402 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.453753+0000 mon.b (mon.1) 368 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.453753+0000 mon.b (mon.1) 368 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.469354+0000 mon.a (mon.0) 2403 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.469354+0000 mon.a (mon.0) 2403 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.693175+0000 mon.a (mon.0) 2404 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.693175+0000 mon.a (mon.0) 2404 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.695033+0000 mon.c (mon.2) 578 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:28 vm00 bash[28333]: audit 2026-03-09T17:33:27.695033+0000 mon.c (mon.2) 578 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: cluster 2026-03-09T17:33:27.451927+0000 mon.a (mon.0) 2402 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: cluster 2026-03-09T17:33:27.451927+0000 mon.a (mon.0) 2402 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.453753+0000 mon.b (mon.1) 368 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.453753+0000 mon.b (mon.1) 368 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.469354+0000 mon.a (mon.0) 2403 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.469354+0000 mon.a (mon.0) 2403 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.693175+0000 mon.a (mon.0) 2404 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.693175+0000 mon.a (mon.0) 2404 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.695033+0000 mon.c (mon.2) 578 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:28 vm00 bash[20770]: audit 2026-03-09T17:33:27.695033+0000 mon.c 
(mon.2) 578 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: cluster 2026-03-09T17:33:27.451927+0000 mon.a (mon.0) 2402 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: cluster 2026-03-09T17:33:27.451927+0000 mon.a (mon.0) 2402 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.453753+0000 mon.b (mon.1) 368 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.453753+0000 mon.b (mon.1) 368 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.469354+0000 mon.a (mon.0) 2403 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.469354+0000 mon.a (mon.0) 2403 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.693175+0000 mon.a (mon.0) 2404 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.693175+0000 mon.a (mon.0) 2404 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.695033+0000 mon.c (mon.2) 578 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:28 vm02 bash[23351]: audit 2026-03-09T17:33:27.695033+0000 mon.c (mon.2) 578 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: audit 2026-03-09T17:33:28.468597+0000 mon.a (mon.0) 2405 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: audit 2026-03-09T17:33:28.468597+0000 mon.a (mon.0) 2405 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 
2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: cluster 2026-03-09T17:33:28.472158+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: cluster 2026-03-09T17:33:28.472158+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: audit 2026-03-09T17:33:28.473695+0000 mon.b (mon.1) 369 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: audit 2026-03-09T17:33:28.473695+0000 mon.b (mon.1) 369 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: audit 2026-03-09T17:33:28.482730+0000 mon.a (mon.0) 2407 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: audit 2026-03-09T17:33:28.482730+0000 mon.a (mon.0) 2407 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: cluster 2026-03-09T17:33:28.745264+0000 mgr.y (mgr.14505) 303 : cluster [DBG] pgmap v474: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:29 vm02 bash[23351]: cluster 2026-03-09T17:33:28.745264+0000 mgr.y (mgr.14505) 303 : cluster [DBG] pgmap v474: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:30.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: audit 2026-03-09T17:33:28.468597+0000 mon.a (mon.0) 2405 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: audit 2026-03-09T17:33:28.468597+0000 mon.a (mon.0) 2405 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: cluster 2026-03-09T17:33:28.472158+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: cluster 2026-03-09T17:33:28.472158+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: audit 2026-03-09T17:33:28.473695+0000 mon.b (mon.1) 369 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush 
rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: audit 2026-03-09T17:33:28.473695+0000 mon.b (mon.1) 369 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: audit 2026-03-09T17:33:28.482730+0000 mon.a (mon.0) 2407 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: audit 2026-03-09T17:33:28.482730+0000 mon.a (mon.0) 2407 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: cluster 2026-03-09T17:33:28.745264+0000 mgr.y (mgr.14505) 303 : cluster [DBG] pgmap v474: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:29 vm00 bash[28333]: cluster 2026-03-09T17:33:28.745264+0000 mgr.y (mgr.14505) 303 : cluster [DBG] pgmap v474: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: audit 2026-03-09T17:33:28.468597+0000 mon.a (mon.0) 2405 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: audit 2026-03-09T17:33:28.468597+0000 mon.a (mon.0) 2405 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: cluster 2026-03-09T17:33:28.472158+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: cluster 2026-03-09T17:33:28.472158+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: audit 2026-03-09T17:33:28.473695+0000 mon.b (mon.1) 369 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: audit 2026-03-09T17:33:28.473695+0000 mon.b (mon.1) 369 : audit [INF] from='client.33753 192.168.123.100:0/3743045471' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: audit 2026-03-09T17:33:28.482730+0000 mon.a (mon.0) 2407 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: 
dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: audit 2026-03-09T17:33:28.482730+0000 mon.a (mon.0) 2407 : audit [INF] from='client.33753 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]: dispatch 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: cluster 2026-03-09T17:33:28.745264+0000 mgr.y (mgr.14505) 303 : cluster [DBG] pgmap v474: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:29 vm00 bash[20770]: cluster 2026-03-09T17:33:28.745264+0000 mgr.y (mgr.14505) 303 : cluster [DBG] pgmap v474: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.590045+0000 mon.a (mon.0) 2408 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.590045+0000 mon.a (mon.0) 2408 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: cluster 2026-03-09T17:33:29.594836+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: cluster 2026-03-09T17:33:29.594836+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.610090+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.610090+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.611723+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.611723+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.612387+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 
192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.612387+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.612625+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.612625+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.613254+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.613254+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.613486+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:30 vm02 bash[23351]: audit 2026-03-09T17:33:29.613486+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.590045+0000 mon.a (mon.0) 2408 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.590045+0000 mon.a (mon.0) 2408 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: cluster 2026-03-09T17:33:29.594836+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: cluster 2026-03-09T17:33:29.594836+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.610090+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.610090+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.611723+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.611723+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.612387+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.612387+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.612625+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.612625+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.613254+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.613254+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.613486+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:30 vm00 bash[28333]: audit 2026-03-09T17:33:29.613486+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.590045+0000 mon.a (mon.0) 2408 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.590045+0000 mon.a (mon.0) 2408 : audit [INF] from='client.33753 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59916-64"}]': finished 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: cluster 2026-03-09T17:33:29.594836+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: cluster 2026-03-09T17:33:29.594836+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.610090+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.610090+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 
192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.611723+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.611723+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.612387+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.612387+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.612625+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.612625+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.613254+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.613254+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.613486+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:30 vm00 bash[20770]: audit 2026-03-09T17:33:29.613486+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: audit 2026-03-09T17:33:30.593734+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: audit 2026-03-09T17:33:30.593734+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: audit 2026-03-09T17:33:30.602266+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: audit 2026-03-09T17:33:30.602266+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: cluster 2026-03-09T17:33:30.605653+0000 mon.a (mon.0) 2414 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: cluster 2026-03-09T17:33:30.605653+0000 mon.a (mon.0) 2414 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: audit 2026-03-09T17:33:30.608272+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: audit 2026-03-09T17:33:30.608272+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: cluster 2026-03-09T17:33:30.745593+0000 mgr.y (mgr.14505) 304 : cluster [DBG] pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: cluster 2026-03-09T17:33:30.745593+0000 mgr.y (mgr.14505) 304 : cluster [DBG] pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: cluster 2026-03-09T17:33:31.613906+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T17:33:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:31 vm02 bash[23351]: cluster 2026-03-09T17:33:31.613906+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T17:33:31.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:33:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:33:32.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: audit 2026-03-09T17:33:30.593734+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:32.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: audit 2026-03-09T17:33:30.593734+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:32.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: audit 2026-03-09T17:33:30.602266+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: audit 2026-03-09T17:33:30.602266+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: cluster 2026-03-09T17:33:30.605653+0000 mon.a (mon.0) 2414 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: cluster 2026-03-09T17:33:30.605653+0000 mon.a (mon.0) 2414 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: audit 2026-03-09T17:33:30.608272+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: audit 2026-03-09T17:33:30.608272+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: cluster 2026-03-09T17:33:30.745593+0000 mgr.y (mgr.14505) 304 : cluster [DBG] pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: cluster 2026-03-09T17:33:30.745593+0000 mgr.y (mgr.14505) 304 : cluster [DBG] pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: cluster 2026-03-09T17:33:31.613906+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:31 vm00 bash[28333]: cluster 2026-03-09T17:33:31.613906+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: audit 2026-03-09T17:33:30.593734+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: audit 2026-03-09T17:33:30.593734+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59916-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: audit 2026-03-09T17:33:30.602266+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: audit 2026-03-09T17:33:30.602266+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 
192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: cluster 2026-03-09T17:33:30.605653+0000 mon.a (mon.0) 2414 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: cluster 2026-03-09T17:33:30.605653+0000 mon.a (mon.0) 2414 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: audit 2026-03-09T17:33:30.608272+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: audit 2026-03-09T17:33:30.608272+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: cluster 2026-03-09T17:33:30.745593+0000 mgr.y (mgr.14505) 304 : cluster [DBG] pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: cluster 2026-03-09T17:33:30.745593+0000 mgr.y (mgr.14505) 304 : cluster [DBG] pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: cluster 2026-03-09T17:33:31.613906+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T17:33:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:31 vm00 bash[20770]: cluster 2026-03-09T17:33:31.613906+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T17:33:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:32 vm02 bash[23351]: audit 2026-03-09T17:33:31.762478+0000 mgr.y (mgr.14505) 305 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:32 vm02 bash[23351]: audit 2026-03-09T17:33:31.762478+0000 mgr.y (mgr.14505) 305 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:32 vm02 bash[23351]: audit 2026-03-09T17:33:32.600987+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:32 vm02 bash[23351]: audit 2026-03-09T17:33:32.600987+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:32 vm02 bash[23351]: cluster 2026-03-09T17:33:32.624304+0000 mon.a (mon.0) 2418 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T17:33:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:32 vm02 bash[23351]: cluster 2026-03-09T17:33:32.624304+0000 mon.a (mon.0) 2418 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T17:33:33.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:32 vm00 bash[28333]: audit 2026-03-09T17:33:31.762478+0000 mgr.y (mgr.14505) 305 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:33.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:32 vm00 bash[28333]: audit 2026-03-09T17:33:31.762478+0000 mgr.y (mgr.14505) 305 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:32 vm00 bash[28333]: audit 2026-03-09T17:33:32.600987+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:32 vm00 bash[28333]: audit 2026-03-09T17:33:32.600987+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:32 vm00 bash[28333]: cluster 2026-03-09T17:33:32.624304+0000 mon.a (mon.0) 2418 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:32 vm00 bash[28333]: cluster 2026-03-09T17:33:32.624304+0000 mon.a (mon.0) 2418 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:32 vm00 bash[20770]: audit 2026-03-09T17:33:31.762478+0000 mgr.y (mgr.14505) 305 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:32 vm00 bash[20770]: audit 2026-03-09T17:33:31.762478+0000 mgr.y (mgr.14505) 305 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:32 vm00 bash[20770]: audit 2026-03-09T17:33:32.600987+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:32 vm00 bash[20770]: audit 2026-03-09T17:33:32.600987+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59916-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:32 vm00 bash[20770]: cluster 2026-03-09T17:33:32.624304+0000 mon.a (mon.0) 2418 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T17:33:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:32 vm00 bash[20770]: cluster 2026-03-09T17:33:32.624304+0000 mon.a (mon.0) 2418 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T17:33:34.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:33 vm00 bash[28333]: audit 2026-03-09T17:33:32.689894+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:33 vm00 bash[28333]: audit 2026-03-09T17:33:32.689894+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:33 vm00 bash[28333]: audit 2026-03-09T17:33:32.691053+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:33 vm00 bash[28333]: audit 2026-03-09T17:33:32.691053+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:33 vm00 bash[28333]: cluster 2026-03-09T17:33:32.745880+0000 mgr.y (mgr.14505) 306 : cluster [DBG] pgmap v480: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:34.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:33 vm00 bash[28333]: cluster 2026-03-09T17:33:32.745880+0000 mgr.y (mgr.14505) 306 : cluster [DBG] pgmap v480: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:33 vm00 bash[20770]: audit 2026-03-09T17:33:32.689894+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:33 vm00 bash[20770]: audit 2026-03-09T17:33:32.689894+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:33 vm00 bash[20770]: audit 2026-03-09T17:33:32.691053+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:33 vm00 bash[20770]: audit 2026-03-09T17:33:32.691053+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:33 vm00 bash[20770]: cluster 2026-03-09T17:33:32.745880+0000 mgr.y (mgr.14505) 306 : cluster [DBG] pgmap v480: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:33 vm00 bash[20770]: cluster 2026-03-09T17:33:32.745880+0000 mgr.y (mgr.14505) 306 : cluster [DBG] pgmap v480: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:33 vm02 bash[23351]: audit 2026-03-09T17:33:32.689894+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:33 vm02 bash[23351]: audit 2026-03-09T17:33:32.689894+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:33 vm02 bash[23351]: audit 2026-03-09T17:33:32.691053+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:33 vm02 bash[23351]: audit 2026-03-09T17:33:32.691053+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:33 vm02 bash[23351]: cluster 2026-03-09T17:33:32.745880+0000 mgr.y (mgr.14505) 306 : cluster [DBG] pgmap v480: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:33 vm02 bash[23351]: cluster 2026-03-09T17:33:32.745880+0000 mgr.y (mgr.14505) 306 : cluster [DBG] pgmap v480: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:35.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: cluster 2026-03-09T17:33:33.667557+0000 mon.a (mon.0) 2420 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: cluster 2026-03-09T17:33:33.667557+0000 mon.a (mon.0) 2420 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: audit 2026-03-09T17:33:33.669414+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: audit 2026-03-09T17:33:33.669414+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: cluster 2026-03-09T17:33:33.672561+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: cluster 2026-03-09T17:33:33.672561+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: audit 2026-03-09T17:33:33.678401+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: audit 2026-03-09T17:33:33.678401+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: audit 2026-03-09T17:33:33.697025+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:34 vm00 bash[28333]: audit 2026-03-09T17:33:33.697025+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: cluster 2026-03-09T17:33:33.667557+0000 mon.a (mon.0) 2420 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: cluster 2026-03-09T17:33:33.667557+0000 mon.a (mon.0) 2420 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: audit 2026-03-09T17:33:33.669414+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: audit 2026-03-09T17:33:33.669414+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: cluster 2026-03-09T17:33:33.672561+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: cluster 2026-03-09T17:33:33.672561+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: audit 2026-03-09T17:33:33.678401+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: audit 2026-03-09T17:33:33.678401+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: audit 2026-03-09T17:33:33.697025+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:34 vm00 bash[20770]: audit 2026-03-09T17:33:33.697025+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: cluster 2026-03-09T17:33:33.667557+0000 mon.a (mon.0) 2420 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: cluster 2026-03-09T17:33:33.667557+0000 mon.a (mon.0) 2420 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: audit 2026-03-09T17:33:33.669414+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: audit 2026-03-09T17:33:33.669414+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: cluster 2026-03-09T17:33:33.672561+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: cluster 2026-03-09T17:33:33.672561+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: audit 2026-03-09T17:33:33.678401+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: audit 2026-03-09T17:33:33.678401+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: audit 2026-03-09T17:33:33.697025+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:34 vm02 bash[23351]: audit 2026-03-09T17:33:33.697025+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP2163:head 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:5d165639:::164:head 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f43765fc:::165:head 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b4c720e9:::166:head 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:e694b040:::167:head 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:afa38db2:::168:head 2026-03-09T17:33:35.717 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:77ba9f53:::169:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:87495034:::170:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7c96bf0e:::171:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:dbe346cc:::172:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:e943ec24:::173:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f97a9c0c:::174:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:6f26e74d:::175:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4f95e106:::176:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:0e6f2f8f:::177:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:05db05f1:::178:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:38a78d66:::179:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:d095610b:::180:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:a1a9d709:::181:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:1e5d39db:::182:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:f7df4fb9:::183:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:03a7f161:::184:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ba70721e:::185:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:28e5662d:::186:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:973d52de:::187:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:4303eb1c:::188:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:b990b48e:::189:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:29b8165b:::190:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:3547f197:::191:head 2026-03-09T17:33:35.718 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:7e260936:::192:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:1abec7b1:::193:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:10fdda93:::194:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:15817eea:::195:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:770bab57:::196:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:ed9e13e7:::197:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:71471a8f:::198:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 258:10fb1d02:::199:head 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetWrite (7566 ms) 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetTrim 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first is 1773077571 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,1773077576,1773077577,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,1773077576,1773077577,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,1773077576,1773077577,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,1773077576,1773077577,1773077579,1773077580,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,1773077576,1773077577,1773077579,1773077580,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077571,1773077573,1773077574,1773077576,1773077577,1773077579,1773077580,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773077574,1773077576,1773077577,1773077579,1773077580,1773077582,1773077583,0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first now 1773077574, trimmed 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetTrim (20449 ms) 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteOn2ndRead 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: foo0 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: verifying foo0 is eventually 
promoted 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteOn2ndRead (14140 ms) 2026-03-09T17:33:35.718 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ProxyRead 2026-03-09T17:33:36.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: cluster 2026-03-09T17:33:34.690085+0000 mon.a (mon.0) 2424 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: cluster 2026-03-09T17:33:34.690085+0000 mon.a (mon.0) 2424 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.708402+0000 mon.a (mon.0) 2425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.708402+0000 mon.a (mon.0) 2425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: cluster 2026-03-09T17:33:34.720964+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: cluster 2026-03-09T17:33:34.720964+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.721402+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.721402+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.721792+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.721792+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: cluster 2026-03-09T17:33:34.746248+0000 mgr.y (mgr.14505) 307 : cluster [DBG] pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: cluster 2026-03-09T17:33:34.746248+0000 mgr.y (mgr.14505) 307 : cluster [DBG] pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.747068+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.747068+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.747637+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.747637+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.748148+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.748148+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.748707+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:35 vm00 bash[28333]: audit 2026-03-09T17:33:34.748707+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: cluster 2026-03-09T17:33:34.690085+0000 mon.a (mon.0) 2424 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: cluster 2026-03-09T17:33:34.690085+0000 mon.a (mon.0) 2424 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.708402+0000 mon.a (mon.0) 2425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.708402+0000 mon.a (mon.0) 2425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: cluster 2026-03-09T17:33:34.720964+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: cluster 2026-03-09T17:33:34.720964+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.721402+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.721402+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.721792+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.721792+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: cluster 2026-03-09T17:33:34.746248+0000 mgr.y (mgr.14505) 307 : cluster [DBG] pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: cluster 2026-03-09T17:33:34.746248+0000 mgr.y (mgr.14505) 307 : cluster [DBG] pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.747068+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.747068+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.747637+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.747637+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.748148+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.748148+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.748707+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:35 vm00 bash[20770]: audit 2026-03-09T17:33:34.748707+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: cluster 2026-03-09T17:33:34.690085+0000 mon.a (mon.0) 2424 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: cluster 2026-03-09T17:33:34.690085+0000 mon.a (mon.0) 2424 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.708402+0000 mon.a (mon.0) 2425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.708402+0000 mon.a (mon.0) 2425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]': finished 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: cluster 2026-03-09T17:33:34.720964+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: cluster 2026-03-09T17:33:34.720964+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.721402+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.721402+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.721792+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.721792+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: cluster 2026-03-09T17:33:34.746248+0000 mgr.y (mgr.14505) 307 : cluster [DBG] pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: cluster 2026-03-09T17:33:34.746248+0000 mgr.y (mgr.14505) 307 : cluster [DBG] pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.747068+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.747068+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.747637+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.747637+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.748148+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.748148+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.748707+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:35 vm02 bash[23351]: audit 2026-03-09T17:33:34.748707+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-49"}]: dispatch 2026-03-09T17:33:36.744 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:33:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:33:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: audit 2026-03-09T17:33:35.710464+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: audit 2026-03-09T17:33:35.710464+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: audit 2026-03-09T17:33:35.714164+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: audit 2026-03-09T17:33:35.714164+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: cluster 2026-03-09T17:33:35.716111+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: cluster 2026-03-09T17:33:35.716111+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: audit 2026-03-09T17:33:35.720241+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:36 vm00 bash[28333]: audit 2026-03-09T17:33:35.720241+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: audit 2026-03-09T17:33:35.710464+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: audit 2026-03-09T17:33:35.710464+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: audit 2026-03-09T17:33:35.714164+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 
192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: audit 2026-03-09T17:33:35.714164+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: cluster 2026-03-09T17:33:35.716111+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: cluster 2026-03-09T17:33:35.716111+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: audit 2026-03-09T17:33:35.720241+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:36 vm00 bash[20770]: audit 2026-03-09T17:33:35.720241+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: audit 2026-03-09T17:33:35.710464+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: audit 2026-03-09T17:33:35.710464+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: audit 2026-03-09T17:33:35.714164+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: audit 2026-03-09T17:33:35.714164+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.100:0/308390546' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: cluster 2026-03-09T17:33:35.716111+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: cluster 2026-03-09T17:33:35.716111+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: audit 2026-03-09T17:33:35.720241+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:36 vm02 bash[23351]: audit 2026-03-09T17:33:35.720241+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.714170+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.714170+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: cluster 2026-03-09T17:33:36.730442+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: cluster 2026-03-09T17:33:36.730442+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.733616+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.733616+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.736583+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.736583+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: cluster 2026-03-09T17:33:36.746535+0000 mgr.y (mgr.14505) 308 : cluster [DBG] pgmap v486: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: cluster 2026-03-09T17:33:36.746535+0000 mgr.y (mgr.14505) 308 : cluster [DBG] pgmap v486: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.758553+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 
192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.758553+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.761093+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.761093+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.761335+0000 mon.a (mon.0) 2438 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:36.761335+0000 mon.a (mon.0) 2438 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:37.717351+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:37.717351+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:37.717419+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:37.717419+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 
192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: cluster 2026-03-09T17:33:37.734388+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: cluster 2026-03-09T17:33:37.734388+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:37.734826+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:37 vm00 bash[28333]: audit 2026-03-09T17:33:37.734826+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.714170+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.714170+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: cluster 2026-03-09T17:33:36.730442+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T17:33:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: cluster 2026-03-09T17:33:36.730442+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.733616+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.733616+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.736583+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.736583+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: cluster 2026-03-09T17:33:36.746535+0000 mgr.y (mgr.14505) 308 : cluster [DBG] pgmap v486: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: cluster 2026-03-09T17:33:36.746535+0000 mgr.y (mgr.14505) 308 : cluster [DBG] pgmap v486: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.758553+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.758553+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.761093+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.761093+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.761335+0000 mon.a (mon.0) 2438 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:36.761335+0000 mon.a (mon.0) 2438 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:37.717351+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:37.717351+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:37.717419+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:37.717419+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: cluster 2026-03-09T17:33:37.734388+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: cluster 2026-03-09T17:33:37.734388+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:37.734826+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:37 vm00 bash[20770]: audit 2026-03-09T17:33:37.734826+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.714170+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.714170+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59916-65"}]': finished 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: cluster 2026-03-09T17:33:36.730442+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: cluster 2026-03-09T17:33:36.730442+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.733616+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.733616+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.736583+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.736583+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: cluster 2026-03-09T17:33:36.746535+0000 mgr.y (mgr.14505) 308 : cluster [DBG] pgmap v486: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: cluster 2026-03-09T17:33:36.746535+0000 mgr.y (mgr.14505) 308 : cluster [DBG] pgmap v486: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 722 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.758553+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.758553+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.761093+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 
192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.761093+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.761335+0000 mon.a (mon.0) 2438 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:36.761335+0000 mon.a (mon.0) 2438 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:37.717351+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:37.717351+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:37.717419+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:37.717419+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59916-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: cluster 2026-03-09T17:33:37.734388+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: cluster 2026-03-09T17:33:37.734388+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:37.734826+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? 
192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:37 vm02 bash[23351]: audit 2026-03-09T17:33:37.734826+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:37.784124+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:37.784124+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:37.785282+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:37.785282+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:38.721502+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:38.721502+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:38.729139+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:38.729139+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: cluster 2026-03-09T17:33:38.736368+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: cluster 2026-03-09T17:33:38.736368+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:38.736627+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:38 vm00 bash[28333]: audit 2026-03-09T17:33:38.736627+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:37.784124+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:37.784124+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:37.785282+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:37.785282+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:38.721502+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:38.721502+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:38.729139+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:38.729139+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: cluster 2026-03-09T17:33:38.736368+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: cluster 2026-03-09T17:33:38.736368+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:38.736627+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:38 vm00 bash[20770]: audit 2026-03-09T17:33:38.736627+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:37.784124+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:37.784124+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:37.785282+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:37.785282+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:38.721502+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:38.721502+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:38.729139+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:38.729139+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: cluster 2026-03-09T17:33:38.736368+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: cluster 2026-03-09T17:33:38.736368+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:38.736627+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:38 vm02 bash[23351]: audit 2026-03-09T17:33:38.736627+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:40.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: cluster 2026-03-09T17:33:38.746861+0000 mgr.y (mgr.14505) 309 : cluster [DBG] pgmap v489: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: cluster 2026-03-09T17:33:38.746861+0000 mgr.y (mgr.14505) 309 : cluster [DBG] pgmap v489: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: cluster 2026-03-09T17:33:39.721418+0000 mon.a (mon.0) 2447 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: cluster 2026-03-09T17:33:39.721418+0000 mon.a (mon.0) 2447 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.732715+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.732715+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.733103+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.733103+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.735957+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.735957+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: cluster 2026-03-09T17:33:39.739096+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: cluster 2026-03-09T17:33:39.739096+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.741418+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:39 vm00 bash[28333]: audit 2026-03-09T17:33:39.741418+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: cluster 2026-03-09T17:33:38.746861+0000 mgr.y (mgr.14505) 309 : cluster [DBG] pgmap v489: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: cluster 2026-03-09T17:33:38.746861+0000 mgr.y (mgr.14505) 309 : cluster [DBG] pgmap v489: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: cluster 2026-03-09T17:33:39.721418+0000 mon.a (mon.0) 2447 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: cluster 2026-03-09T17:33:39.721418+0000 mon.a (mon.0) 2447 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.732715+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.732715+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.733103+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.733103+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.735957+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.735957+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: cluster 2026-03-09T17:33:39.739096+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: cluster 2026-03-09T17:33:39.739096+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.741418+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:39 vm00 bash[20770]: audit 2026-03-09T17:33:39.741418+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: cluster 2026-03-09T17:33:38.746861+0000 mgr.y (mgr.14505) 309 : cluster [DBG] pgmap v489: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: cluster 2026-03-09T17:33:38.746861+0000 mgr.y (mgr.14505) 309 : cluster [DBG] pgmap v489: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: cluster 2026-03-09T17:33:39.721418+0000 mon.a (mon.0) 2447 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: cluster 2026-03-09T17:33:39.721418+0000 mon.a (mon.0) 2447 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.732715+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.732715+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59916-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.733103+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.733103+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.735957+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.735957+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: cluster 2026-03-09T17:33:39.739096+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: cluster 2026-03-09T17:33:39.739096+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.741418+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:39 vm02 bash[23351]: audit 2026-03-09T17:33:39.741418+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]: dispatch 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:40 vm00 bash[28333]: cluster 2026-03-09T17:33:40.732773+0000 mon.a (mon.0) 2452 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:40 vm00 bash[28333]: cluster 2026-03-09T17:33:40.732773+0000 mon.a (mon.0) 2452 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:40 vm00 bash[28333]: audit 2026-03-09T17:33:40.735906+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]': finished 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:40 vm00 bash[28333]: audit 2026-03-09T17:33:40.735906+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]': finished 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:40 vm00 bash[28333]: cluster 2026-03-09T17:33:40.741872+0000 mon.a (mon.0) 2454 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:40 vm00 bash[28333]: cluster 2026-03-09T17:33:40.741872+0000 mon.a (mon.0) 2454 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:40 vm00 bash[20770]: cluster 2026-03-09T17:33:40.732773+0000 mon.a (mon.0) 2452 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:40 vm00 bash[20770]: cluster 2026-03-09T17:33:40.732773+0000 mon.a (mon.0) 2452 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:40 vm00 bash[20770]: audit 2026-03-09T17:33:40.735906+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]': finished 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:40 vm00 bash[20770]: audit 2026-03-09T17:33:40.735906+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]': finished 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:40 vm00 bash[20770]: cluster 2026-03-09T17:33:40.741872+0000 mon.a (mon.0) 2454 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T17:33:41.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:40 vm00 bash[20770]: cluster 2026-03-09T17:33:40.741872+0000 mon.a (mon.0) 2454 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T17:33:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:40 vm02 bash[23351]: cluster 2026-03-09T17:33:40.732773+0000 mon.a (mon.0) 2452 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:40 vm02 bash[23351]: cluster 2026-03-09T17:33:40.732773+0000 mon.a (mon.0) 2452 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:33:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:40 vm02 bash[23351]: audit 2026-03-09T17:33:40.735906+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]': finished 2026-03-09T17:33:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:40 vm02 bash[23351]: audit 2026-03-09T17:33:40.735906+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-51", "mode": "writeback"}]': finished 2026-03-09T17:33:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:40 vm02 bash[23351]: cluster 2026-03-09T17:33:40.741872+0000 mon.a (mon.0) 2454 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T17:33:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:40 vm02 bash[23351]: cluster 2026-03-09T17:33:40.741872+0000 mon.a (mon.0) 2454 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: cluster 2026-03-09T17:33:40.747210+0000 mgr.y (mgr.14505) 310 : cluster [DBG] pgmap v492: 300 pgs: 8 unknown, 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: cluster 2026-03-09T17:33:40.747210+0000 mgr.y (mgr.14505) 310 : cluster [DBG] pgmap v492: 300 pgs: 8 unknown, 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:40.789161+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:40.789161+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:40.790402+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:40.790402+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.745894+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.745894+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.749401+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.749401+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: cluster 2026-03-09T17:33:41.753828+0000 mon.a (mon.0) 2457 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: cluster 2026-03-09T17:33:41.753828+0000 mon.a (mon.0) 2457 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.757132+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.757132+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.757780+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:41 vm00 bash[28333]: audit 2026-03-09T17:33:41.757780+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: cluster 2026-03-09T17:33:40.747210+0000 mgr.y (mgr.14505) 310 : cluster [DBG] pgmap v492: 300 pgs: 8 unknown, 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: cluster 2026-03-09T17:33:40.747210+0000 mgr.y (mgr.14505) 310 : cluster [DBG] pgmap v492: 300 pgs: 8 unknown, 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:40.789161+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:40.789161+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:40.790402+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:40.790402+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.745894+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.745894+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.749401+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.749401+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: cluster 2026-03-09T17:33:41.753828+0000 mon.a (mon.0) 2457 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: cluster 2026-03-09T17:33:41.753828+0000 mon.a (mon.0) 2457 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.757132+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.757132+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.757780+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:41 vm00 bash[20770]: audit 2026-03-09T17:33:41.757780+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: cluster 2026-03-09T17:33:40.747210+0000 mgr.y (mgr.14505) 310 : cluster [DBG] pgmap v492: 300 pgs: 8 unknown, 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: cluster 2026-03-09T17:33:40.747210+0000 mgr.y (mgr.14505) 310 : cluster [DBG] pgmap v492: 300 pgs: 8 unknown, 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:40.789161+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:40.789161+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:40.790402+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:40.790402+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.745894+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.745894+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.749401+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.749401+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: cluster 2026-03-09T17:33:41.753828+0000 mon.a (mon.0) 2457 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: cluster 2026-03-09T17:33:41.753828+0000 mon.a (mon.0) 2457 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.757132+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.757132+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.757780+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:41 vm02 bash[23351]: audit 2026-03-09T17:33:41.757780+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:42.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:33:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:41.770052+0000 mgr.y (mgr.14505) 311 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:41.770052+0000 mgr.y (mgr.14505) 311 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.701694+0000 mon.c (mon.2) 585 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.701694+0000 mon.c (mon.2) 585 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.750307+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.750307+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.750452+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.750452+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 
192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.753375+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.753375+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: cluster 2026-03-09T17:33:42.758112+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: cluster 2026-03-09T17:33:42.758112+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.758401+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.758401+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.759814+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:42 vm00 bash[28333]: audit 2026-03-09T17:33:42.759814+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:41.770052+0000 mgr.y (mgr.14505) 311 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:41.770052+0000 mgr.y (mgr.14505) 311 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.701694+0000 mon.c (mon.2) 585 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.701694+0000 mon.c (mon.2) 585 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.750307+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.750307+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.750452+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.750452+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.753375+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.753375+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: cluster 2026-03-09T17:33:42.758112+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: cluster 2026-03-09T17:33:42.758112+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.758401+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.758401+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.759814+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:42 vm00 bash[20770]: audit 2026-03-09T17:33:42.759814+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:41.770052+0000 mgr.y (mgr.14505) 311 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:41.770052+0000 mgr.y (mgr.14505) 311 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.701694+0000 mon.c (mon.2) 585 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.701694+0000 mon.c (mon.2) 585 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.750307+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.750307+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.750452+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.750452+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.753375+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.753375+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: cluster 2026-03-09T17:33:42.758112+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: cluster 2026-03-09T17:33:42.758112+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.758401+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.758401+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.759814+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:42 vm02 bash[23351]: audit 2026-03-09T17:33:42.759814+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP2 (2989 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPP 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPP (7184 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPPNS 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPPNS (7057 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.StatRemovePP 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.StatRemovePP (7028 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ExecuteClassPP 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ExecuteClassPP (7168 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.OmapPP 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.OmapPP (7135 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.MultiWritePP 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.MultiWritePP (7037 ms) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC (139466 ms total) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] Global test environment tear-down 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [==========] 57 tests from 4 test suites ran. (291639 ms total) 2026-03-09T17:33:43.772 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ PASSED ] 57 tests. 
2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: cluster 2026-03-09T17:33:42.747667+0000 mgr.y (mgr.14505) 312 : cluster [DBG] pgmap v494: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: cluster 2026-03-09T17:33:42.747667+0000 mgr.y (mgr.14505) 312 : cluster [DBG] pgmap v494: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: cluster 2026-03-09T17:33:43.750561+0000 mon.a (mon.0) 2465 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: cluster 2026-03-09T17:33:43.750561+0000 mon.a (mon.0) 2465 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.753551+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.753551+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.753598+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.753598+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.757039+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.757039+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: cluster 2026-03-09T17:33:43.757087+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: cluster 2026-03-09T17:33:43.757087+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.767984+0000 mon.a (mon.0) 2469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:43 vm02 bash[23351]: audit 2026-03-09T17:33:43.767984+0000 mon.a (mon.0) 2469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: cluster 2026-03-09T17:33:42.747667+0000 mgr.y (mgr.14505) 312 : cluster [DBG] pgmap v494: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: cluster 2026-03-09T17:33:42.747667+0000 mgr.y (mgr.14505) 312 : cluster [DBG] pgmap v494: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: cluster 2026-03-09T17:33:43.750561+0000 mon.a (mon.0) 2465 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: cluster 2026-03-09T17:33:43.750561+0000 mon.a (mon.0) 2465 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.753551+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.753551+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.753598+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.753598+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.757039+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.757039+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: cluster 2026-03-09T17:33:43.757087+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: cluster 2026-03-09T17:33:43.757087+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.767984+0000 mon.a (mon.0) 2469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:43 vm00 bash[28333]: audit 2026-03-09T17:33:43.767984+0000 mon.a (mon.0) 2469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: cluster 2026-03-09T17:33:42.747667+0000 mgr.y (mgr.14505) 312 : cluster [DBG] pgmap v494: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: cluster 2026-03-09T17:33:42.747667+0000 mgr.y (mgr.14505) 312 : cluster [DBG] pgmap v494: 292 pgs: 22 creating+activating, 270 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: cluster 2026-03-09T17:33:43.750561+0000 mon.a (mon.0) 2465 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: cluster 2026-03-09T17:33:43.750561+0000 mon.a (mon.0) 2465 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.753551+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 
192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.753551+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 192.168.123.100:0/1053815791' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59916-66"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.753598+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.753598+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.757039+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.757039+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: cluster 2026-03-09T17:33:43.757087+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: cluster 2026-03-09T17:33:43.757087+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.767984+0000 mon.a (mon.0) 2469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:43 vm00 bash[20770]: audit 2026-03-09T17:33:43.767984+0000 mon.a (mon.0) 2469 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: cluster 2026-03-09T17:33:44.748375+0000 mgr.y (mgr.14505) 313 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 766 B/s wr, 4 op/s 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: cluster 2026-03-09T17:33:44.748375+0000 mgr.y (mgr.14505) 313 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 766 B/s wr, 4 op/s 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: audit 2026-03-09T17:33:44.757296+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: audit 2026-03-09T17:33:44.757296+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: audit 2026-03-09T17:33:44.760327+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: audit 2026-03-09T17:33:44.760327+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: cluster 2026-03-09T17:33:44.760628+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: cluster 2026-03-09T17:33:44.760628+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: audit 2026-03-09T17:33:44.762015+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:45 vm00 bash[28333]: audit 2026-03-09T17:33:44.762015+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: cluster 2026-03-09T17:33:44.748375+0000 mgr.y (mgr.14505) 313 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 766 B/s wr, 4 op/s 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: cluster 2026-03-09T17:33:44.748375+0000 mgr.y (mgr.14505) 313 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 766 B/s wr, 4 op/s 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: audit 2026-03-09T17:33:44.757296+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: audit 2026-03-09T17:33:44.757296+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: audit 2026-03-09T17:33:44.760327+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: audit 2026-03-09T17:33:44.760327+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: cluster 2026-03-09T17:33:44.760628+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: cluster 2026-03-09T17:33:44.760628+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: audit 2026-03-09T17:33:44.762015+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:45 vm00 bash[20770]: audit 2026-03-09T17:33:44.762015+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: cluster 2026-03-09T17:33:44.748375+0000 mgr.y (mgr.14505) 313 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 766 B/s wr, 4 op/s 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: cluster 2026-03-09T17:33:44.748375+0000 mgr.y (mgr.14505) 313 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 766 B/s wr, 4 op/s 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: audit 2026-03-09T17:33:44.757296+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: audit 2026-03-09T17:33:44.757296+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: audit 2026-03-09T17:33:44.760327+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: audit 2026-03-09T17:33:44.760327+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: cluster 2026-03-09T17:33:44.760628+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: cluster 2026-03-09T17:33:44.760628+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: audit 2026-03-09T17:33:44.762015+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:45 vm02 bash[23351]: audit 2026-03-09T17:33:44.762015+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:33:46.783 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:33:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:33:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:46 vm00 bash[28333]: audit 2026-03-09T17:33:45.761203+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:46 vm00 bash[28333]: audit 2026-03-09T17:33:45.761203+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:46 vm00 bash[28333]: cluster 2026-03-09T17:33:45.763919+0000 mon.a (mon.0) 2474 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:46 vm00 bash[28333]: cluster 2026-03-09T17:33:45.763919+0000 mon.a (mon.0) 2474 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:46 vm00 bash[20770]: audit 2026-03-09T17:33:45.761203+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:46 vm00 bash[20770]: audit 2026-03-09T17:33:45.761203+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:46 vm00 bash[20770]: cluster 2026-03-09T17:33:45.763919+0000 mon.a (mon.0) 2474 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T17:33:47.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:46 vm00 bash[20770]: cluster 2026-03-09T17:33:45.763919+0000 mon.a (mon.0) 2474 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T17:33:47.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:46 vm02 bash[23351]: audit 2026-03-09T17:33:45.761203+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:33:47.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:46 vm02 bash[23351]: audit 2026-03-09T17:33:45.761203+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:33:47.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:46 vm02 bash[23351]: cluster 2026-03-09T17:33:45.763919+0000 mon.a (mon.0) 2474 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T17:33:47.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:46 vm02 bash[23351]: cluster 2026-03-09T17:33:45.763919+0000 mon.a (mon.0) 2474 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:47 vm00 bash[20770]: cluster 2026-03-09T17:33:46.748737+0000 mgr.y (mgr.14505) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:47 vm00 bash[20770]: cluster 2026-03-09T17:33:46.748737+0000 mgr.y (mgr.14505) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:47 vm00 bash[20770]: cluster 2026-03-09T17:33:46.773378+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:47 vm00 bash[20770]: cluster 2026-03-09T17:33:46.773378+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:47 vm00 bash[20770]: audit 2026-03-09T17:33:47.558682+0000 mon.c (mon.2) 586 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:47 vm00 bash[20770]: audit 2026-03-09T17:33:47.558682+0000 mon.c (mon.2) 586 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:47 vm00 bash[28333]: cluster 2026-03-09T17:33:46.748737+0000 mgr.y (mgr.14505) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:47 vm00 bash[28333]: cluster 2026-03-09T17:33:46.748737+0000 mgr.y (mgr.14505) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:47 vm00 bash[28333]: cluster 2026-03-09T17:33:46.773378+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:47 vm00 bash[28333]: cluster 2026-03-09T17:33:46.773378+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:47 vm00 bash[28333]: audit 2026-03-09T17:33:47.558682+0000 mon.c (mon.2) 586 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:33:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:47 vm00 bash[28333]: audit 2026-03-09T17:33:47.558682+0000 mon.c (mon.2) 586 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:33:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:47 vm02 bash[23351]: cluster 2026-03-09T17:33:46.748737+0000 mgr.y (mgr.14505) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:33:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:47 vm02 bash[23351]: cluster 2026-03-09T17:33:46.748737+0000 mgr.y (mgr.14505) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-09T17:33:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:47 vm02 bash[23351]: cluster 2026-03-09T17:33:46.773378+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:33:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:47 vm02 bash[23351]: cluster 2026-03-09T17:33:46.773378+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:33:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:47 vm02 bash[23351]: audit 2026-03-09T17:33:47.558682+0000 mon.c (mon.2) 586 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:33:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:47 vm02 bash[23351]: audit 2026-03-09T17:33:47.558682+0000 mon.c (mon.2) 586 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:33:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:48 vm02 bash[23351]: audit 2026-03-09T17:33:47.885828+0000 mon.c (mon.2) 587 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:33:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:48 vm02 bash[23351]: audit 2026-03-09T17:33:47.885828+0000 mon.c (mon.2) 587 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:33:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:48 vm02 bash[23351]: audit 2026-03-09T17:33:47.886500+0000 mon.c (mon.2) 588 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:33:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:48 vm02 bash[23351]: audit 2026-03-09T17:33:47.886500+0000 mon.c (mon.2) 588 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:33:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:48 vm02 bash[23351]: audit 2026-03-09T17:33:47.892271+0000 mon.a (mon.0) 2476 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:48 vm02 bash[23351]: audit 
2026-03-09T17:33:47.892271+0000 mon.a (mon.0) 2476 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:48 vm00 bash[20770]: audit 2026-03-09T17:33:47.885828+0000 mon.c (mon.2) 587 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:48 vm00 bash[20770]: audit 2026-03-09T17:33:47.885828+0000 mon.c (mon.2) 587 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:48 vm00 bash[20770]: audit 2026-03-09T17:33:47.886500+0000 mon.c (mon.2) 588 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:48 vm00 bash[20770]: audit 2026-03-09T17:33:47.886500+0000 mon.c (mon.2) 588 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:48 vm00 bash[20770]: audit 2026-03-09T17:33:47.892271+0000 mon.a (mon.0) 2476 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:48 vm00 bash[20770]: audit 2026-03-09T17:33:47.892271+0000 mon.a (mon.0) 2476 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:48 vm00 bash[28333]: audit 2026-03-09T17:33:47.885828+0000 mon.c (mon.2) 587 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:48 vm00 bash[28333]: audit 2026-03-09T17:33:47.885828+0000 mon.c (mon.2) 587 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:48 vm00 bash[28333]: audit 2026-03-09T17:33:47.886500+0000 mon.c (mon.2) 588 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:48 vm00 bash[28333]: audit 2026-03-09T17:33:47.886500+0000 mon.c (mon.2) 588 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:48 vm00 bash[28333]: audit 2026-03-09T17:33:47.892271+0000 mon.a (mon.0) 2476 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:49.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:48 vm00 bash[28333]: audit 2026-03-09T17:33:47.892271+0000 mon.a (mon.0) 2476 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:49 vm00 bash[20770]: cluster 2026-03-09T17:33:48.749524+0000 mgr.y (mgr.14505) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 3 op/s 2026-03-09T17:33:50.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:49 vm00 bash[20770]: cluster 2026-03-09T17:33:48.749524+0000 mgr.y (mgr.14505) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 3 op/s 2026-03-09T17:33:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:49 vm00 bash[28333]: cluster 2026-03-09T17:33:48.749524+0000 mgr.y (mgr.14505) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 3 op/s 2026-03-09T17:33:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:49 vm00 bash[28333]: cluster 2026-03-09T17:33:48.749524+0000 mgr.y (mgr.14505) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 3 op/s 2026-03-09T17:33:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:49 vm02 bash[23351]: cluster 2026-03-09T17:33:48.749524+0000 mgr.y (mgr.14505) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 3 op/s 2026-03-09T17:33:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:49 vm02 bash[23351]: cluster 2026-03-09T17:33:48.749524+0000 mgr.y (mgr.14505) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 3 op/s 2026-03-09T17:33:52.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:51 vm02 bash[23351]: cluster 2026-03-09T17:33:50.749913+0000 mgr.y (mgr.14505) 316 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 732 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:52.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:51 vm02 bash[23351]: cluster 2026-03-09T17:33:50.749913+0000 mgr.y (mgr.14505) 316 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 732 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:52.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:33:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:33:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:51 vm00 bash[20770]: cluster 2026-03-09T17:33:50.749913+0000 mgr.y (mgr.14505) 316 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 732 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:51 vm00 bash[20770]: cluster 2026-03-09T17:33:50.749913+0000 mgr.y (mgr.14505) 316 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 732 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:51 vm00 bash[28333]: cluster 2026-03-09T17:33:50.749913+0000 mgr.y (mgr.14505) 316 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 732 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:51 vm00 bash[28333]: cluster 2026-03-09T17:33:50.749913+0000 mgr.y (mgr.14505) 316 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 732 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:52 vm00 bash[20770]: audit 2026-03-09T17:33:51.780885+0000 mgr.y (mgr.14505) 
317 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:52 vm00 bash[20770]: audit 2026-03-09T17:33:51.780885+0000 mgr.y (mgr.14505) 317 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:52 vm00 bash[28333]: audit 2026-03-09T17:33:51.780885+0000 mgr.y (mgr.14505) 317 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:52 vm00 bash[28333]: audit 2026-03-09T17:33:51.780885+0000 mgr.y (mgr.14505) 317 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:53.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:52 vm02 bash[23351]: audit 2026-03-09T17:33:51.780885+0000 mgr.y (mgr.14505) 317 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:52 vm02 bash[23351]: audit 2026-03-09T17:33:51.780885+0000 mgr.y (mgr.14505) 317 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:33:54.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:53 vm00 bash[28333]: cluster 2026-03-09T17:33:52.750409+0000 mgr.y (mgr.14505) 318 : cluster [DBG] pgmap v503: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:33:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:53 vm00 bash[28333]: cluster 2026-03-09T17:33:52.750409+0000 mgr.y (mgr.14505) 318 : cluster [DBG] pgmap v503: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:33:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:53 vm00 bash[20770]: cluster 2026-03-09T17:33:52.750409+0000 mgr.y (mgr.14505) 318 : cluster [DBG] pgmap v503: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:33:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:53 vm00 bash[20770]: cluster 2026-03-09T17:33:52.750409+0000 mgr.y (mgr.14505) 318 : cluster [DBG] pgmap v503: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:33:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:53 vm02 bash[23351]: cluster 2026-03-09T17:33:52.750409+0000 mgr.y (mgr.14505) 318 : cluster [DBG] pgmap v503: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:33:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:53 vm02 bash[23351]: cluster 2026-03-09T17:33:52.750409+0000 mgr.y (mgr.14505) 318 : cluster [DBG] pgmap v503: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:33:56.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:55 vm00 bash[20770]: cluster 2026-03-09T17:33:54.751192+0000 mgr.y (mgr.14505) 319 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 
8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:55 vm00 bash[20770]: cluster 2026-03-09T17:33:54.751192+0000 mgr.y (mgr.14505) 319 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:55 vm00 bash[20770]: audit 2026-03-09T17:33:55.779166+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:55 vm00 bash[20770]: audit 2026-03-09T17:33:55.779166+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:55 vm00 bash[20770]: audit 2026-03-09T17:33:55.780377+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:55 vm00 bash[20770]: audit 2026-03-09T17:33:55.780377+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:55 vm00 bash[28333]: cluster 2026-03-09T17:33:54.751192+0000 mgr.y (mgr.14505) 319 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:55 vm00 bash[28333]: cluster 2026-03-09T17:33:54.751192+0000 mgr.y (mgr.14505) 319 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:55 vm00 bash[28333]: audit 2026-03-09T17:33:55.779166+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:55 vm00 bash[28333]: audit 2026-03-09T17:33:55.779166+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:55 vm00 bash[28333]: audit 2026-03-09T17:33:55.780377+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:55 vm00 bash[28333]: audit 2026-03-09T17:33:55.780377+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:55 vm02 bash[23351]: cluster 2026-03-09T17:33:54.751192+0000 mgr.y (mgr.14505) 319 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:55 vm02 bash[23351]: cluster 2026-03-09T17:33:54.751192+0000 mgr.y (mgr.14505) 319 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:55 vm02 bash[23351]: audit 2026-03-09T17:33:55.779166+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:55 vm02 bash[23351]: audit 2026-03-09T17:33:55.779166+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:55 vm02 bash[23351]: audit 2026-03-09T17:33:55.780377+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:55 vm02 bash[23351]: audit 2026-03-09T17:33:55.780377+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:33:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:33:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:55.945094+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:55.945094+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: cluster 2026-03-09T17:33:55.954826+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: cluster 2026-03-09T17:33:55.954826+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:55.955351+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:55.955351+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:55.957264+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:55.957264+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:56.948402+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: audit 2026-03-09T17:33:56.948402+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: cluster 2026-03-09T17:33:56.951797+0000 mon.a (mon.0) 2482 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:56 vm00 bash[20770]: cluster 2026-03-09T17:33:56.951797+0000 mon.a (mon.0) 2482 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:55.945094+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:55.945094+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: cluster 2026-03-09T17:33:55.954826+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: cluster 2026-03-09T17:33:55.954826+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:55.955351+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:55.955351+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:55.957264+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:55.957264+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:56.948402+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: audit 2026-03-09T17:33:56.948402+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: cluster 2026-03-09T17:33:56.951797+0000 mon.a (mon.0) 2482 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T17:33:57.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:56 vm00 bash[28333]: cluster 2026-03-09T17:33:56.951797+0000 mon.a (mon.0) 2482 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:55.945094+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:55.945094+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: cluster 2026-03-09T17:33:55.954826+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: cluster 2026-03-09T17:33:55.954826+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:55.955351+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:55.955351+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:55.957264+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:55.957264+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:56.948402+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: audit 2026-03-09T17:33:56.948402+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]': finished 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: cluster 2026-03-09T17:33:56.951797+0000 mon.a (mon.0) 2482 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T17:33:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:56 vm02 bash[23351]: cluster 2026-03-09T17:33:56.951797+0000 mon.a (mon.0) 2482 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:57 vm00 bash[28333]: cluster 2026-03-09T17:33:56.751488+0000 mgr.y (mgr.14505) 320 : cluster [DBG] pgmap v506: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:57 vm00 bash[28333]: cluster 2026-03-09T17:33:56.751488+0000 mgr.y (mgr.14505) 320 : cluster [DBG] pgmap v506: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:57 vm00 bash[28333]: audit 2026-03-09T17:33:56.996023+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:57 vm00 bash[28333]: audit 2026-03-09T17:33:56.996023+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:57 vm00 bash[28333]: audit 2026-03-09T17:33:56.996939+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:57 vm00 bash[28333]: audit 2026-03-09T17:33:56.996939+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:56.997187+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:56.997187+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:56.998035+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:56.998035+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:57.713552+0000 mon.a (mon.0) 2485 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:57.713552+0000 mon.a (mon.0) 2485 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:57.717512+0000 mon.c (mon.2) 589 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:58 vm00 bash[28333]: audit 2026-03-09T17:33:57.717512+0000 mon.c (mon.2) 589 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: cluster 2026-03-09T17:33:56.751488+0000 mgr.y (mgr.14505) 320 : cluster [DBG] pgmap v506: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: cluster 2026-03-09T17:33:56.751488+0000 mgr.y (mgr.14505) 320 : cluster [DBG] pgmap v506: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.996023+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.996023+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.996939+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.996939+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.997187+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.997187+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.998035+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:57 vm00 bash[20770]: audit 2026-03-09T17:33:56.998035+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:58 vm00 bash[20770]: audit 2026-03-09T17:33:57.713552+0000 mon.a (mon.0) 2485 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:58 vm00 bash[20770]: audit 2026-03-09T17:33:57.713552+0000 mon.a (mon.0) 2485 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:58 vm00 bash[20770]: audit 2026-03-09T17:33:57.717512+0000 mon.c (mon.2) 589 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:58.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:58 vm00 bash[20770]: audit 2026-03-09T17:33:57.717512+0000 mon.c (mon.2) 589 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: cluster 2026-03-09T17:33:56.751488+0000 mgr.y (mgr.14505) 320 : cluster [DBG] pgmap v506: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: cluster 2026-03-09T17:33:56.751488+0000 mgr.y (mgr.14505) 320 : cluster [DBG] pgmap v506: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.996023+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.996023+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.996939+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.996939+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.997187+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.997187+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.998035+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:56.998035+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-51"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:57.713552+0000 mon.a (mon.0) 2485 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:57.713552+0000 mon.a (mon.0) 2485 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:57.717512+0000 mon.c (mon.2) 589 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:57 vm02 bash[23351]: audit 2026-03-09T17:33:57.717512+0000 mon.c (mon.2) 589 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:33:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:59 vm02 bash[23351]: cluster 2026-03-09T17:33:57.980796+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T17:33:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:33:59 vm02 bash[23351]: cluster 2026-03-09T17:33:57.980796+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T17:33:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:59 vm00 bash[28333]: cluster 2026-03-09T17:33:57.980796+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T17:33:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:33:59 vm00 bash[28333]: cluster 
2026-03-09T17:33:57.980796+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T17:33:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:59 vm00 bash[20770]: cluster 2026-03-09T17:33:57.980796+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T17:33:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:33:59 vm00 bash[20770]: cluster 2026-03-09T17:33:57.980796+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: cluster 2026-03-09T17:33:58.751783+0000 mgr.y (mgr.14505) 321 : cluster [DBG] pgmap v509: 260 pgs: 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: cluster 2026-03-09T17:33:58.751783+0000 mgr.y (mgr.14505) 321 : cluster [DBG] pgmap v509: 260 pgs: 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: cluster 2026-03-09T17:33:58.992350+0000 mon.a (mon.0) 2487 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: cluster 2026-03-09T17:33:58.992350+0000 mon.a (mon.0) 2487 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: cluster 2026-03-09T17:33:59.069895+0000 mon.a (mon.0) 2488 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: cluster 2026-03-09T17:33:59.069895+0000 mon.a (mon.0) 2488 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: audit 2026-03-09T17:33:59.070143+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: audit 2026-03-09T17:33:59.070143+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: audit 2026-03-09T17:33:59.078425+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:00 vm00 bash[28333]: audit 2026-03-09T17:33:59.078425+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: cluster 2026-03-09T17:33:58.751783+0000 mgr.y (mgr.14505) 321 : cluster [DBG] pgmap v509: 260 pgs: 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: cluster 2026-03-09T17:33:58.751783+0000 mgr.y (mgr.14505) 321 : cluster [DBG] pgmap v509: 260 pgs: 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: cluster 2026-03-09T17:33:58.992350+0000 mon.a (mon.0) 2487 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: cluster 2026-03-09T17:33:58.992350+0000 mon.a (mon.0) 2487 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: cluster 2026-03-09T17:33:59.069895+0000 mon.a (mon.0) 2488 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: cluster 2026-03-09T17:33:59.069895+0000 mon.a (mon.0) 2488 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: audit 2026-03-09T17:33:59.070143+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: audit 2026-03-09T17:33:59.070143+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: audit 2026-03-09T17:33:59.078425+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:00 vm00 bash[20770]: audit 2026-03-09T17:33:59.078425+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: cluster 2026-03-09T17:33:58.751783+0000 mgr.y (mgr.14505) 321 : cluster [DBG] pgmap v509: 260 pgs: 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: cluster 2026-03-09T17:33:58.751783+0000 mgr.y (mgr.14505) 321 : cluster [DBG] pgmap v509: 260 pgs: 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: cluster 2026-03-09T17:33:58.992350+0000 mon.a (mon.0) 2487 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: cluster 2026-03-09T17:33:58.992350+0000 mon.a (mon.0) 2487 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: cluster 2026-03-09T17:33:59.069895+0000 mon.a (mon.0) 2488 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: cluster 2026-03-09T17:33:59.069895+0000 mon.a (mon.0) 2488 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: audit 2026-03-09T17:33:59.070143+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: audit 2026-03-09T17:33:59.070143+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: audit 2026-03-09T17:33:59.078425+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:00 vm02 bash[23351]: audit 2026-03-09T17:33:59.078425+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.145296+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.145296+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: cluster 2026-03-09T17:34:00.148411+0000 mon.a (mon.0) 2491 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: cluster 2026-03-09T17:34:00.148411+0000 mon.a (mon.0) 2491 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.204366+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.204366+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.205195+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.205195+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.205496+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.205496+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.206265+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:01 vm00 bash[28333]: audit 2026-03-09T17:34:00.206265+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.145296+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.145296+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: cluster 2026-03-09T17:34:00.148411+0000 mon.a (mon.0) 2491 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: cluster 2026-03-09T17:34:00.148411+0000 mon.a (mon.0) 2491 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.204366+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.204366+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.205195+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.205195+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.205496+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.205496+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.206265+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:01 vm00 bash[20770]: audit 2026-03-09T17:34:00.206265+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.145296+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.145296+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: cluster 2026-03-09T17:34:00.148411+0000 mon.a (mon.0) 2491 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: cluster 2026-03-09T17:34:00.148411+0000 mon.a (mon.0) 2491 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.204366+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.204366+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.205195+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.205195+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.205496+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.205496+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.206265+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:01 vm02 bash[23351]: audit 2026-03-09T17:34:00.206265+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-53"}]: dispatch 2026-03-09T17:34:02.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:34:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:34:02.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:02 vm00 bash[28333]: cluster 2026-03-09T17:34:00.752122+0000 mgr.y (mgr.14505) 322 : cluster [DBG] pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:02 vm00 bash[28333]: cluster 2026-03-09T17:34:00.752122+0000 mgr.y (mgr.14505) 322 : cluster [DBG] pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:02 vm00 bash[28333]: cluster 2026-03-09T17:34:01.182732+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:02 vm00 bash[28333]: cluster 2026-03-09T17:34:01.182732+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:02 vm00 bash[20770]: cluster 2026-03-09T17:34:00.752122+0000 mgr.y (mgr.14505) 322 : cluster [DBG] pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:02 vm00 bash[20770]: cluster 2026-03-09T17:34:00.752122+0000 mgr.y (mgr.14505) 322 : cluster [DBG] pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:02 vm00 bash[20770]: cluster 2026-03-09T17:34:01.182732+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T17:34:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:02 vm00 bash[20770]: cluster 2026-03-09T17:34:01.182732+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T17:34:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:02 vm02 bash[23351]: cluster 2026-03-09T17:34:00.752122+0000 mgr.y (mgr.14505) 322 : cluster [DBG] pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:34:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:02 vm02 bash[23351]: cluster 2026-03-09T17:34:00.752122+0000 mgr.y (mgr.14505) 322 : cluster [DBG] pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 
2026-03-09T17:34:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:02 vm02 bash[23351]: cluster 2026-03-09T17:34:01.182732+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T17:34:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:02 vm02 bash[23351]: cluster 2026-03-09T17:34:01.182732+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: audit 2026-03-09T17:34:01.790017+0000 mgr.y (mgr.14505) 323 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: audit 2026-03-09T17:34:01.790017+0000 mgr.y (mgr.14505) 323 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: cluster 2026-03-09T17:34:02.211083+0000 mon.a (mon.0) 2495 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: cluster 2026-03-09T17:34:02.211083+0000 mon.a (mon.0) 2495 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: audit 2026-03-09T17:34:02.212922+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: audit 2026-03-09T17:34:02.212922+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: audit 2026-03-09T17:34:02.215698+0000 mon.a (mon.0) 2496 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:03 vm00 bash[28333]: audit 2026-03-09T17:34:02.215698+0000 mon.a (mon.0) 2496 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: audit 2026-03-09T17:34:01.790017+0000 mgr.y (mgr.14505) 323 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: audit 2026-03-09T17:34:01.790017+0000 mgr.y (mgr.14505) 323 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: cluster 2026-03-09T17:34:02.211083+0000 mon.a (mon.0) 2495 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: cluster 2026-03-09T17:34:02.211083+0000 mon.a (mon.0) 2495 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: audit 2026-03-09T17:34:02.212922+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: audit 2026-03-09T17:34:02.212922+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: audit 2026-03-09T17:34:02.215698+0000 mon.a (mon.0) 2496 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:03 vm00 bash[20770]: audit 2026-03-09T17:34:02.215698+0000 mon.a (mon.0) 2496 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: audit 2026-03-09T17:34:01.790017+0000 mgr.y (mgr.14505) 323 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: audit 2026-03-09T17:34:01.790017+0000 mgr.y (mgr.14505) 323 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: cluster 2026-03-09T17:34:02.211083+0000 mon.a (mon.0) 2495 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: cluster 2026-03-09T17:34:02.211083+0000 mon.a (mon.0) 2495 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: audit 2026-03-09T17:34:02.212922+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: audit 2026-03-09T17:34:02.212922+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: audit 2026-03-09T17:34:02.215698+0000 mon.a (mon.0) 2496 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:03 vm02 bash[23351]: audit 2026-03-09T17:34:02.215698+0000 mon.a (mon.0) 2496 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:04.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: cluster 2026-03-09T17:34:02.752505+0000 mgr.y (mgr.14505) 324 : cluster [DBG] pgmap v515: 292 pgs: 4 creating+peering, 28 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: cluster 2026-03-09T17:34:02.752505+0000 mgr.y (mgr.14505) 324 : cluster [DBG] pgmap v515: 292 pgs: 4 creating+peering, 28 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: cluster 2026-03-09T17:34:03.186024+0000 mon.a (mon.0) 2497 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: cluster 2026-03-09T17:34:03.186024+0000 mon.a (mon.0) 2497 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.197002+0000 mon.a (mon.0) 2498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.197002+0000 mon.a (mon.0) 2498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: cluster 2026-03-09T17:34:03.211297+0000 mon.a (mon.0) 2499 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: cluster 2026-03-09T17:34:03.211297+0000 mon.a (mon.0) 2499 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.215524+0000 mon.b (mon.1) 391 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.215524+0000 mon.b (mon.1) 391 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.266743+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.266743+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.267531+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.267531+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.267865+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.267865+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.268584+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:04 vm00 bash[28333]: audit 2026-03-09T17:34:03.268584+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: cluster 2026-03-09T17:34:02.752505+0000 mgr.y (mgr.14505) 324 : cluster [DBG] pgmap v515: 292 pgs: 4 creating+peering, 28 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: cluster 2026-03-09T17:34:02.752505+0000 mgr.y (mgr.14505) 324 : cluster [DBG] pgmap v515: 292 pgs: 4 creating+peering, 28 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: cluster 2026-03-09T17:34:03.186024+0000 mon.a (mon.0) 2497 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: cluster 2026-03-09T17:34:03.186024+0000 mon.a (mon.0) 2497 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.197002+0000 mon.a (mon.0) 2498 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.197002+0000 mon.a (mon.0) 2498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: cluster 2026-03-09T17:34:03.211297+0000 mon.a (mon.0) 2499 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: cluster 2026-03-09T17:34:03.211297+0000 mon.a (mon.0) 2499 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.215524+0000 mon.b (mon.1) 391 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.215524+0000 mon.b (mon.1) 391 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.266743+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.266743+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.267531+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.267531+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.267865+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.267865+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.268584+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:04 vm00 bash[20770]: audit 2026-03-09T17:34:03.268584+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: cluster 2026-03-09T17:34:02.752505+0000 mgr.y (mgr.14505) 324 : cluster [DBG] pgmap v515: 292 pgs: 4 creating+peering, 28 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: cluster 2026-03-09T17:34:02.752505+0000 mgr.y (mgr.14505) 324 : cluster [DBG] pgmap v515: 292 pgs: 4 creating+peering, 28 unknown, 260 active+clean; 8.3 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: cluster 2026-03-09T17:34:03.186024+0000 mon.a (mon.0) 2497 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: cluster 2026-03-09T17:34:03.186024+0000 mon.a (mon.0) 2497 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.197002+0000 mon.a (mon.0) 2498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.197002+0000 mon.a (mon.0) 2498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: cluster 2026-03-09T17:34:03.211297+0000 mon.a (mon.0) 2499 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: cluster 2026-03-09T17:34:03.211297+0000 mon.a (mon.0) 2499 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.215524+0000 mon.b (mon.1) 391 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.215524+0000 mon.b (mon.1) 391 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.266743+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.266743+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.267531+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.267531+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.267865+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.267865+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.268584+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:04 vm02 bash[23351]: audit 2026-03-09T17:34:03.268584+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-55"}]: dispatch 2026-03-09T17:34:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:05 vm02 bash[23351]: cluster 2026-03-09T17:34:04.210115+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T17:34:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:05 vm02 bash[23351]: cluster 2026-03-09T17:34:04.210115+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T17:34:05.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:05 vm00 bash[28333]: cluster 2026-03-09T17:34:04.210115+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T17:34:05.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:05 vm00 bash[28333]: cluster 2026-03-09T17:34:04.210115+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T17:34:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:05 vm00 bash[20770]: cluster 2026-03-09T17:34:04.210115+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T17:34:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:05 vm00 bash[20770]: cluster 2026-03-09T17:34:04.210115+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: cluster 2026-03-09T17:34:04.752827+0000 mgr.y (mgr.14505) 325 : cluster [DBG] pgmap v518: 260 pgs: 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: cluster 2026-03-09T17:34:04.752827+0000 mgr.y (mgr.14505) 325 : cluster [DBG] pgmap v518: 260 pgs: 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: cluster 2026-03-09T17:34:05.360849+0000 mon.a (mon.0) 2503 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: cluster 2026-03-09T17:34:05.360849+0000 mon.a (mon.0) 2503 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: audit 2026-03-09T17:34:05.366174+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: audit 2026-03-09T17:34:05.366174+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: audit 2026-03-09T17:34:05.367499+0000 mon.a (mon.0) 2504 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:06 vm02 bash[23351]: audit 2026-03-09T17:34:05.367499+0000 mon.a (mon.0) 2504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: cluster 2026-03-09T17:34:04.752827+0000 mgr.y (mgr.14505) 325 : cluster [DBG] pgmap v518: 260 pgs: 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: cluster 2026-03-09T17:34:04.752827+0000 mgr.y (mgr.14505) 325 : cluster [DBG] pgmap v518: 260 pgs: 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: cluster 2026-03-09T17:34:05.360849+0000 mon.a (mon.0) 2503 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: cluster 2026-03-09T17:34:05.360849+0000 mon.a (mon.0) 2503 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: audit 2026-03-09T17:34:05.366174+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: audit 2026-03-09T17:34:05.366174+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: audit 2026-03-09T17:34:05.367499+0000 mon.a (mon.0) 2504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:06 vm00 bash[28333]: audit 2026-03-09T17:34:05.367499+0000 mon.a (mon.0) 2504 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: cluster 2026-03-09T17:34:04.752827+0000 mgr.y (mgr.14505) 325 : cluster [DBG] pgmap v518: 260 pgs: 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: cluster 2026-03-09T17:34:04.752827+0000 mgr.y (mgr.14505) 325 : cluster [DBG] pgmap v518: 260 pgs: 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: cluster 2026-03-09T17:34:05.360849+0000 mon.a (mon.0) 2503 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: cluster 2026-03-09T17:34:05.360849+0000 mon.a (mon.0) 2503 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: audit 2026-03-09T17:34:05.366174+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: audit 2026-03-09T17:34:05.366174+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: audit 2026-03-09T17:34:05.367499+0000 mon.a (mon.0) 2504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:06 vm00 bash[20770]: audit 2026-03-09T17:34:05.367499+0000 mon.a (mon.0) 2504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:34:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:34:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.352419+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.352419+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: cluster 2026-03-09T17:34:06.365220+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: cluster 2026-03-09T17:34:06.365220+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.416267+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.416267+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.417138+0000 mon.b (mon.1) 396 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.417138+0000 mon.b (mon.1) 396 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.417510+0000 mon.a (mon.0) 2507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.417510+0000 mon.a (mon.0) 2507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.418199+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:07 vm02 bash[23351]: audit 2026-03-09T17:34:06.418199+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.352419+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.352419+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: cluster 2026-03-09T17:34:06.365220+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: cluster 2026-03-09T17:34:06.365220+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.416267+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.416267+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.417138+0000 mon.b (mon.1) 396 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.417138+0000 mon.b (mon.1) 396 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.417510+0000 mon.a (mon.0) 2507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.417510+0000 mon.a (mon.0) 2507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.418199+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:07 vm00 bash[28333]: audit 2026-03-09T17:34:06.418199+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.352419+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.352419+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: cluster 2026-03-09T17:34:06.365220+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: cluster 2026-03-09T17:34:06.365220+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.416267+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.416267+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.417138+0000 mon.b (mon.1) 396 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.417138+0000 mon.b (mon.1) 396 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.417510+0000 mon.a (mon.0) 2507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.417510+0000 mon.a (mon.0) 2507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.418199+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:07 vm00 bash[20770]: audit 2026-03-09T17:34:06.418199+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-57"}]: dispatch 2026-03-09T17:34:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:08 vm02 bash[23351]: cluster 2026-03-09T17:34:06.753131+0000 mgr.y (mgr.14505) 326 : cluster [DBG] pgmap v521: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:08 vm02 bash[23351]: cluster 2026-03-09T17:34:06.753131+0000 mgr.y (mgr.14505) 326 : cluster [DBG] pgmap v521: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:08 vm02 bash[23351]: cluster 2026-03-09T17:34:07.384753+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T17:34:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:08 vm02 bash[23351]: cluster 2026-03-09T17:34:07.384753+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T17:34:08.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:08 vm00 bash[28333]: cluster 2026-03-09T17:34:06.753131+0000 mgr.y (mgr.14505) 326 : cluster [DBG] pgmap v521: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:08.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:08 vm00 bash[28333]: cluster 2026-03-09T17:34:06.753131+0000 mgr.y (mgr.14505) 326 : cluster [DBG] pgmap v521: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:08.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:08 vm00 bash[28333]: cluster 2026-03-09T17:34:07.384753+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T17:34:08.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:08 vm00 bash[28333]: cluster 2026-03-09T17:34:07.384753+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T17:34:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:08 vm00 bash[20770]: cluster 2026-03-09T17:34:06.753131+0000 mgr.y (mgr.14505) 326 : cluster [DBG] pgmap v521: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:08 vm00 bash[20770]: cluster 2026-03-09T17:34:06.753131+0000 mgr.y (mgr.14505) 326 : cluster [DBG] pgmap v521: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:34:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:08 vm00 bash[20770]: cluster 2026-03-09T17:34:07.384753+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T17:34:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:08 vm00 bash[20770]: cluster 2026-03-09T17:34:07.384753+0000 mon.a (mon.0) 2509 : cluster 
[DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T17:34:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:09 vm02 bash[23351]: cluster 2026-03-09T17:34:08.371445+0000 mon.a (mon.0) 2510 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T17:34:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:09 vm02 bash[23351]: cluster 2026-03-09T17:34:08.371445+0000 mon.a (mon.0) 2510 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T17:34:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:09 vm02 bash[23351]: audit 2026-03-09T17:34:08.371994+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:09 vm02 bash[23351]: audit 2026-03-09T17:34:08.371994+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:09 vm02 bash[23351]: audit 2026-03-09T17:34:08.373691+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:09 vm02 bash[23351]: audit 2026-03-09T17:34:08.373691+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:09 vm00 bash[28333]: cluster 2026-03-09T17:34:08.371445+0000 mon.a (mon.0) 2510 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T17:34:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:09 vm00 bash[28333]: cluster 2026-03-09T17:34:08.371445+0000 mon.a (mon.0) 2510 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T17:34:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:09 vm00 bash[28333]: audit 2026-03-09T17:34:08.371994+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:09 vm00 bash[28333]: audit 2026-03-09T17:34:08.371994+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:09 vm00 bash[28333]: audit 2026-03-09T17:34:08.373691+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:09 vm00 bash[28333]: audit 2026-03-09T17:34:08.373691+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:09 vm00 bash[20770]: cluster 2026-03-09T17:34:08.371445+0000 mon.a (mon.0) 2510 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T17:34:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:09 vm00 bash[20770]: cluster 2026-03-09T17:34:08.371445+0000 mon.a (mon.0) 2510 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T17:34:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:09 vm00 bash[20770]: audit 2026-03-09T17:34:08.371994+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:09 vm00 bash[20770]: audit 2026-03-09T17:34:08.371994+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:09 vm00 bash[20770]: audit 2026-03-09T17:34:08.373691+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:09 vm00 bash[20770]: audit 2026-03-09T17:34:08.373691+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: cluster 2026-03-09T17:34:08.753477+0000 mgr.y (mgr.14505) 327 : cluster [DBG] pgmap v524: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: cluster 2026-03-09T17:34:08.753477+0000 mgr.y (mgr.14505) 327 : cluster [DBG] pgmap v524: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: cluster 2026-03-09T17:34:09.372029+0000 mon.a (mon.0) 2512 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: cluster 2026-03-09T17:34:09.372029+0000 mon.a (mon.0) 2512 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.391168+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.391168+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.398308+0000 mon.b (mon.1) 398 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.398308+0000 mon.b (mon.1) 398 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: cluster 2026-03-09T17:34:09.407382+0000 mon.a (mon.0) 2514 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: cluster 2026-03-09T17:34:09.407382+0000 mon.a (mon.0) 2514 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.463393+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.463393+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.464059+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.464059+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.464551+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.464551+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.465061+0000 mon.a (mon.0) 2516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:10 vm02 bash[23351]: audit 2026-03-09T17:34:09.465061+0000 mon.a (mon.0) 2516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: cluster 2026-03-09T17:34:08.753477+0000 mgr.y (mgr.14505) 327 : cluster [DBG] pgmap v524: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: cluster 2026-03-09T17:34:08.753477+0000 mgr.y (mgr.14505) 327 : cluster [DBG] pgmap v524: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: cluster 2026-03-09T17:34:09.372029+0000 mon.a (mon.0) 2512 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: cluster 2026-03-09T17:34:09.372029+0000 mon.a (mon.0) 2512 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.391168+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.391168+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.398308+0000 mon.b (mon.1) 398 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.398308+0000 mon.b (mon.1) 398 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: cluster 2026-03-09T17:34:09.407382+0000 mon.a (mon.0) 2514 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: cluster 2026-03-09T17:34:09.407382+0000 mon.a (mon.0) 2514 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.463393+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.463393+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.464059+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.464059+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.464551+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.464551+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.465061+0000 mon.a (mon.0) 2516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:10 vm00 bash[28333]: audit 2026-03-09T17:34:09.465061+0000 mon.a (mon.0) 2516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: cluster 2026-03-09T17:34:08.753477+0000 mgr.y (mgr.14505) 327 : cluster [DBG] pgmap v524: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: cluster 2026-03-09T17:34:08.753477+0000 mgr.y (mgr.14505) 327 : cluster [DBG] pgmap v524: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: cluster 2026-03-09T17:34:09.372029+0000 mon.a (mon.0) 2512 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: cluster 2026-03-09T17:34:09.372029+0000 mon.a (mon.0) 2512 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.391168+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.391168+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.398308+0000 mon.b (mon.1) 398 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.398308+0000 mon.b (mon.1) 398 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: cluster 2026-03-09T17:34:09.407382+0000 mon.a (mon.0) 2514 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: cluster 2026-03-09T17:34:09.407382+0000 mon.a (mon.0) 2514 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.463393+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.463393+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.464059+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.464059+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.464551+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.464551+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.465061+0000 mon.a (mon.0) 2516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:10 vm00 bash[20770]: audit 2026-03-09T17:34:09.465061+0000 mon.a (mon.0) 2516 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-59"}]: dispatch 2026-03-09T17:34:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:11 vm00 bash[28333]: cluster 2026-03-09T17:34:10.397477+0000 mon.a (mon.0) 2517 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T17:34:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:11 vm00 bash[28333]: cluster 2026-03-09T17:34:10.397477+0000 mon.a (mon.0) 2517 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T17:34:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:11 vm00 bash[28333]: cluster 2026-03-09T17:34:10.753763+0000 mgr.y (mgr.14505) 328 : cluster [DBG] pgmap v527: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:11 vm00 bash[28333]: cluster 2026-03-09T17:34:10.753763+0000 mgr.y (mgr.14505) 328 : cluster [DBG] pgmap v527: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:11 vm00 bash[20770]: cluster 2026-03-09T17:34:10.397477+0000 mon.a (mon.0) 2517 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T17:34:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:11 vm00 bash[20770]: cluster 2026-03-09T17:34:10.397477+0000 mon.a (mon.0) 2517 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T17:34:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:11 vm00 bash[20770]: cluster 2026-03-09T17:34:10.753763+0000 mgr.y (mgr.14505) 328 : cluster [DBG] pgmap v527: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:11 vm00 bash[20770]: cluster 2026-03-09T17:34:10.753763+0000 mgr.y (mgr.14505) 328 : cluster [DBG] pgmap v527: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:11.800 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:11 vm02 bash[23351]: cluster 2026-03-09T17:34:10.397477+0000 mon.a (mon.0) 2517 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T17:34:11.800 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:11 vm02 bash[23351]: cluster 2026-03-09T17:34:10.397477+0000 mon.a (mon.0) 2517 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T17:34:11.800 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:11 vm02 bash[23351]: cluster 2026-03-09T17:34:10.753763+0000 mgr.y (mgr.14505) 328 : cluster [DBG] pgmap v527: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:11.800 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:11 vm02 bash[23351]: cluster 2026-03-09T17:34:10.753763+0000 mgr.y (mgr.14505) 328 : cluster [DBG] pgmap v527: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:34:12.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:34:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:34:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: cluster 2026-03-09T17:34:11.410931+0000 mon.a (mon.0) 2518 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 
2026-03-09T17:34:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: cluster 2026-03-09T17:34:11.410931+0000 mon.a (mon.0) 2518 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T17:34:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: audit 2026-03-09T17:34:11.416176+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: audit 2026-03-09T17:34:11.416176+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: audit 2026-03-09T17:34:11.421686+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: audit 2026-03-09T17:34:11.421686+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: audit 2026-03-09T17:34:11.800700+0000 mgr.y (mgr.14505) 329 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:12 vm00 bash[28333]: audit 2026-03-09T17:34:11.800700+0000 mgr.y (mgr.14505) 329 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: cluster 2026-03-09T17:34:11.410931+0000 mon.a (mon.0) 2518 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: cluster 2026-03-09T17:34:11.410931+0000 mon.a (mon.0) 2518 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: audit 2026-03-09T17:34:11.416176+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: audit 2026-03-09T17:34:11.416176+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: audit 2026-03-09T17:34:11.421686+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: audit 2026-03-09T17:34:11.421686+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: audit 2026-03-09T17:34:11.800700+0000 mgr.y (mgr.14505) 329 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:12 vm00 bash[20770]: audit 2026-03-09T17:34:11.800700+0000 mgr.y (mgr.14505) 329 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: cluster 2026-03-09T17:34:11.410931+0000 mon.a (mon.0) 2518 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: cluster 2026-03-09T17:34:11.410931+0000 mon.a (mon.0) 2518 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: audit 2026-03-09T17:34:11.416176+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: audit 2026-03-09T17:34:11.416176+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: audit 2026-03-09T17:34:11.421686+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: audit 2026-03-09T17:34:11.421686+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: audit 2026-03-09T17:34:11.800700+0000 mgr.y (mgr.14505) 329 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:12 vm02 bash[23351]: audit 2026-03-09T17:34:11.800700+0000 mgr.y (mgr.14505) 329 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.407691+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.407691+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.414216+0000 mon.b (mon.1) 402 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.414216+0000 mon.b (mon.1) 402 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.416486+0000 mon.b (mon.1) 403 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.416486+0000 mon.b (mon.1) 403 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: cluster 2026-03-09T17:34:12.424522+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: cluster 2026-03-09T17:34:12.424522+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.432685+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.432685+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.729696+0000 mon.a (mon.0) 2523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.729696+0000 mon.a (mon.0) 2523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.731606+0000 mon.c (mon.2) 590 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:12.731606+0000 mon.c (mon.2) 590 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: cluster 2026-03-09T17:34:12.754192+0000 mgr.y (mgr.14505) 330 : cluster [DBG] pgmap v530: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 751 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: cluster 2026-03-09T17:34:12.754192+0000 mgr.y (mgr.14505) 330 : cluster [DBG] pgmap v530: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 751 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:13.411190+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: audit 2026-03-09T17:34:13.411190+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: cluster 2026-03-09T17:34:13.417498+0000 mon.a (mon.0) 2525 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:13 vm00 bash[28333]: cluster 2026-03-09T17:34:13.417498+0000 mon.a (mon.0) 2525 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.407691+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.407691+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.414216+0000 mon.b (mon.1) 402 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.414216+0000 mon.b (mon.1) 402 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.416486+0000 mon.b (mon.1) 403 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.416486+0000 mon.b (mon.1) 403 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: cluster 2026-03-09T17:34:12.424522+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: cluster 2026-03-09T17:34:12.424522+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.432685+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.432685+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.729696+0000 mon.a (mon.0) 2523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.729696+0000 mon.a (mon.0) 2523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.731606+0000 mon.c (mon.2) 590 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:12.731606+0000 mon.c (mon.2) 590 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: cluster 2026-03-09T17:34:12.754192+0000 mgr.y (mgr.14505) 330 : cluster [DBG] pgmap v530: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 751 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: cluster 2026-03-09T17:34:12.754192+0000 mgr.y (mgr.14505) 330 : cluster [DBG] pgmap v530: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 751 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:13.411190+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: audit 2026-03-09T17:34:13.411190+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: cluster 2026-03-09T17:34:13.417498+0000 mon.a (mon.0) 2525 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T17:34:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:13 vm00 bash[20770]: cluster 2026-03-09T17:34:13.417498+0000 mon.a (mon.0) 2525 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.407691+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.407691+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.414216+0000 mon.b (mon.1) 402 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.414216+0000 mon.b (mon.1) 402 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.416486+0000 mon.b (mon.1) 403 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.416486+0000 mon.b (mon.1) 403 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: cluster 2026-03-09T17:34:12.424522+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: cluster 2026-03-09T17:34:12.424522+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.432685+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.432685+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.729696+0000 mon.a (mon.0) 2523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.729696+0000 mon.a (mon.0) 2523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.731606+0000 mon.c (mon.2) 590 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:12.731606+0000 mon.c (mon.2) 590 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: cluster 2026-03-09T17:34:12.754192+0000 mgr.y (mgr.14505) 330 : cluster [DBG] pgmap v530: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 751 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: cluster 2026-03-09T17:34:12.754192+0000 mgr.y (mgr.14505) 330 : cluster [DBG] pgmap v530: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 751 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:13.411190+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: audit 2026-03-09T17:34:13.411190+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: cluster 2026-03-09T17:34:13.417498+0000 mon.a (mon.0) 2525 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T17:34:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:13 vm02 bash[23351]: cluster 2026-03-09T17:34:13.417498+0000 mon.a (mon.0) 2525 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T17:34:14.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.458230+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.458230+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.459240+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.459240+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.459502+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.459502+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.460334+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:14 vm00 bash[28333]: audit 2026-03-09T17:34:13.460334+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.458230+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.458230+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.459240+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.459240+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.459502+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.459502+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.460334+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:14 vm00 bash[20770]: audit 2026-03-09T17:34:13.460334+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.458230+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.458230+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.459240+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.459240+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.459502+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.459502+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.460334+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:14 vm02 bash[23351]: audit 2026-03-09T17:34:13.460334+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-61"}]: dispatch 2026-03-09T17:34:15.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:15 vm00 bash[28333]: cluster 2026-03-09T17:34:14.442041+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:15 vm00 bash[28333]: cluster 2026-03-09T17:34:14.442041+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:15 vm00 bash[28333]: cluster 2026-03-09T17:34:14.754513+0000 mgr.y (mgr.14505) 331 : cluster [DBG] pgmap v533: 260 pgs: 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:15 vm00 bash[28333]: cluster 2026-03-09T17:34:14.754513+0000 mgr.y (mgr.14505) 331 : cluster [DBG] pgmap v533: 260 pgs: 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:15 vm00 bash[20770]: cluster 2026-03-09T17:34:14.442041+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:15 vm00 bash[20770]: cluster 2026-03-09T17:34:14.442041+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:15 vm00 bash[20770]: cluster 2026-03-09T17:34:14.754513+0000 mgr.y (mgr.14505) 331 : cluster [DBG] pgmap v533: 260 pgs: 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:15 vm00 bash[20770]: cluster 2026-03-09T17:34:14.754513+0000 mgr.y (mgr.14505) 331 : cluster [DBG] pgmap v533: 260 pgs: 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:15.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:15 vm02 bash[23351]: cluster 2026-03-09T17:34:14.442041+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T17:34:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:15 vm02 bash[23351]: cluster 2026-03-09T17:34:14.442041+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T17:34:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:15 vm02 bash[23351]: cluster 2026-03-09T17:34:14.754513+0000 mgr.y (mgr.14505) 331 : cluster [DBG] pgmap v533: 260 pgs: 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:15.886 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:15 vm02 bash[23351]: cluster 2026-03-09T17:34:14.754513+0000 mgr.y (mgr.14505) 331 : cluster [DBG] pgmap v533: 260 pgs: 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:16.706 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:34:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:34:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:34:17.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:16 vm00 bash[28333]: cluster 2026-03-09T17:34:15.470459+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:16 vm00 bash[28333]: cluster 2026-03-09T17:34:15.470459+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:16 vm00 bash[28333]: audit 2026-03-09T17:34:15.471628+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:16 vm00 bash[28333]: audit 2026-03-09T17:34:15.471628+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:16 vm00 bash[28333]: audit 2026-03-09T17:34:15.473407+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:16 vm00 bash[28333]: audit 2026-03-09T17:34:15.473407+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:16 vm00 bash[20770]: cluster 2026-03-09T17:34:15.470459+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:16 vm00 bash[20770]: cluster 2026-03-09T17:34:15.470459+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:16 vm00 bash[20770]: audit 2026-03-09T17:34:15.471628+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:16 vm00 bash[20770]: audit 2026-03-09T17:34:15.471628+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:16 vm00 bash[20770]: audit 2026-03-09T17:34:15.473407+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:16 vm00 bash[20770]: audit 2026-03-09T17:34:15.473407+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:16 vm02 bash[23351]: cluster 2026-03-09T17:34:15.470459+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T17:34:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:16 vm02 bash[23351]: cluster 2026-03-09T17:34:15.470459+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T17:34:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:16 vm02 bash[23351]: audit 2026-03-09T17:34:15.471628+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:16 vm02 bash[23351]: audit 2026-03-09T17:34:15.471628+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:16 vm02 bash[23351]: audit 2026-03-09T17:34:15.473407+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:16 vm02 bash[23351]: audit 2026-03-09T17:34:15.473407+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:18.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.619973+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.619973+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: cluster 2026-03-09T17:34:16.677015+0000 mon.a (mon.0) 2532 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: cluster 2026-03-09T17:34:16.677015+0000 mon.a (mon.0) 2532 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: cluster 2026-03-09T17:34:16.699203+0000 mon.a (mon.0) 2533 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: cluster 2026-03-09T17:34:16.699203+0000 mon.a (mon.0) 2533 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.709529+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.709529+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.711836+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.711836+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.712841+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: audit 2026-03-09T17:34:16.712841+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: cluster 2026-03-09T17:34:16.754792+0000 mgr.y (mgr.14505) 332 : cluster [DBG] pgmap v536: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:17 vm00 bash[28333]: cluster 2026-03-09T17:34:16.754792+0000 mgr.y (mgr.14505) 332 : cluster [DBG] pgmap v536: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.619973+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.619973+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: cluster 2026-03-09T17:34:16.677015+0000 mon.a (mon.0) 2532 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: cluster 2026-03-09T17:34:16.677015+0000 mon.a (mon.0) 2532 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: cluster 2026-03-09T17:34:16.699203+0000 mon.a (mon.0) 2533 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: cluster 2026-03-09T17:34:16.699203+0000 mon.a (mon.0) 2533 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.709529+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.709529+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.711836+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.711836+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.712841+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: audit 2026-03-09T17:34:16.712841+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: cluster 2026-03-09T17:34:16.754792+0000 mgr.y (mgr.14505) 332 : cluster [DBG] pgmap v536: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:17 vm00 bash[20770]: cluster 2026-03-09T17:34:16.754792+0000 mgr.y (mgr.14505) 332 : cluster [DBG] pgmap v536: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.619973+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.619973+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: cluster 2026-03-09T17:34:16.677015+0000 mon.a (mon.0) 2532 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: cluster 2026-03-09T17:34:16.677015+0000 mon.a (mon.0) 2532 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: cluster 2026-03-09T17:34:16.699203+0000 mon.a (mon.0) 2533 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: cluster 2026-03-09T17:34:16.699203+0000 mon.a (mon.0) 2533 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.709529+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.709529+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.711836+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.711836+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.712841+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: audit 2026-03-09T17:34:16.712841+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: cluster 2026-03-09T17:34:16.754792+0000 mgr.y (mgr.14505) 332 : cluster [DBG] pgmap v536: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:17 vm02 bash[23351]: cluster 2026-03-09T17:34:16.754792+0000 mgr.y (mgr.14505) 332 : cluster [DBG] pgmap v536: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:34:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:18 vm00 bash[28333]: audit 2026-03-09T17:34:17.709354+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:18 vm00 bash[28333]: audit 2026-03-09T17:34:17.709354+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:18 vm00 bash[28333]: cluster 2026-03-09T17:34:17.712005+0000 mon.a (mon.0) 2536 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T17:34:19.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:18 vm00 bash[28333]: cluster 2026-03-09T17:34:17.712005+0000 mon.a (mon.0) 2536 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T17:34:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:18 vm00 bash[20770]: audit 2026-03-09T17:34:17.709354+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:18 vm00 bash[20770]: audit 2026-03-09T17:34:17.709354+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:18 vm00 bash[20770]: cluster 2026-03-09T17:34:17.712005+0000 mon.a (mon.0) 2536 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T17:34:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:18 vm00 bash[20770]: cluster 2026-03-09T17:34:17.712005+0000 mon.a (mon.0) 2536 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T17:34:19.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:18 vm02 bash[23351]: audit 2026-03-09T17:34:17.709354+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:18 vm02 bash[23351]: audit 2026-03-09T17:34:17.709354+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:18 vm02 bash[23351]: cluster 2026-03-09T17:34:17.712005+0000 mon.a (mon.0) 2536 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T17:34:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:18 vm02 bash[23351]: cluster 2026-03-09T17:34:17.712005+0000 mon.a (mon.0) 2536 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T17:34:20.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:19 vm00 bash[28333]: cluster 2026-03-09T17:34:18.729595+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:19 vm00 bash[28333]: cluster 2026-03-09T17:34:18.729595+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:19 vm00 bash[28333]: cluster 2026-03-09T17:34:18.755092+0000 mgr.y (mgr.14505) 333 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:19 vm00 bash[28333]: cluster 2026-03-09T17:34:18.755092+0000 mgr.y (mgr.14505) 333 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:19 vm00 bash[20770]: cluster 2026-03-09T17:34:18.729595+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:19 vm00 bash[20770]: cluster 2026-03-09T17:34:18.729595+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:19 vm00 bash[20770]: cluster 2026-03-09T17:34:18.755092+0000 mgr.y (mgr.14505) 333 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:34:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:19 vm00 bash[20770]: cluster 2026-03-09T17:34:18.755092+0000 mgr.y (mgr.14505) 333 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:34:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:19 vm02 bash[23351]: cluster 2026-03-09T17:34:18.729595+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T17:34:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:19 vm02 bash[23351]: cluster 2026-03-09T17:34:18.729595+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T17:34:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:19 vm02 bash[23351]: cluster 2026-03-09T17:34:18.755092+0000 mgr.y (mgr.14505) 333 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:34:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:19 vm02 bash[23351]: cluster 2026-03-09T17:34:18.755092+0000 mgr.y (mgr.14505) 333 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:34:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:20 vm02 bash[23351]: cluster 2026-03-09T17:34:19.747779+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T17:34:21.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:20 vm02 bash[23351]: cluster 2026-03-09T17:34:19.747779+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T17:34:21.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:20 vm00 bash[28333]: cluster 2026-03-09T17:34:19.747779+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T17:34:21.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:20 vm00 bash[28333]: cluster 2026-03-09T17:34:19.747779+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T17:34:21.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:20 vm00 bash[20770]: cluster 2026-03-09T17:34:19.747779+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T17:34:21.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:20 vm00 bash[20770]: cluster 2026-03-09T17:34:19.747779+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T17:34:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:21 vm02 bash[23351]: cluster 2026-03-09T17:34:20.755445+0000 mgr.y (mgr.14505) 334 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 756 B/s wr, 1 op/s 2026-03-09T17:34:22.144 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:21 vm02 bash[23351]: cluster 2026-03-09T17:34:20.755445+0000 mgr.y (mgr.14505) 334 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 756 B/s wr, 1 op/s 2026-03-09T17:34:22.144 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:21 vm02 bash[23351]: cluster 2026-03-09T17:34:20.802248+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T17:34:22.144 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:21 vm02 bash[23351]: cluster 2026-03-09T17:34:20.802248+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T17:34:22.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:34:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:34:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:21 vm00 bash[28333]: cluster 2026-03-09T17:34:20.755445+0000 mgr.y (mgr.14505) 334 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 756 B/s wr, 1 op/s 2026-03-09T17:34:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:21 vm00 bash[28333]: cluster 2026-03-09T17:34:20.755445+0000 mgr.y (mgr.14505) 334 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 756 B/s wr, 1 op/s 2026-03-09T17:34:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:21 vm00 bash[28333]: cluster 2026-03-09T17:34:20.802248+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T17:34:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:21 vm00 bash[28333]: cluster 2026-03-09T17:34:20.802248+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T17:34:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:21 vm00 bash[20770]: cluster 
2026-03-09T17:34:20.755445+0000 mgr.y (mgr.14505) 334 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 756 B/s wr, 1 op/s 2026-03-09T17:34:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:21 vm00 bash[20770]: cluster 2026-03-09T17:34:20.755445+0000 mgr.y (mgr.14505) 334 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 756 B/s wr, 1 op/s 2026-03-09T17:34:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:21 vm00 bash[20770]: cluster 2026-03-09T17:34:20.802248+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T17:34:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:21 vm00 bash[20770]: cluster 2026-03-09T17:34:20.802248+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T17:34:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:22 vm02 bash[23351]: cluster 2026-03-09T17:34:21.804983+0000 mon.a (mon.0) 2540 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T17:34:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:22 vm02 bash[23351]: cluster 2026-03-09T17:34:21.804983+0000 mon.a (mon.0) 2540 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T17:34:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:22 vm02 bash[23351]: audit 2026-03-09T17:34:21.810959+0000 mgr.y (mgr.14505) 335 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:22 vm02 bash[23351]: audit 2026-03-09T17:34:21.810959+0000 mgr.y (mgr.14505) 335 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:22 vm00 bash[28333]: cluster 2026-03-09T17:34:21.804983+0000 mon.a (mon.0) 2540 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T17:34:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:22 vm00 bash[28333]: cluster 2026-03-09T17:34:21.804983+0000 mon.a (mon.0) 2540 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T17:34:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:22 vm00 bash[28333]: audit 2026-03-09T17:34:21.810959+0000 mgr.y (mgr.14505) 335 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:22 vm00 bash[28333]: audit 2026-03-09T17:34:21.810959+0000 mgr.y (mgr.14505) 335 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:23.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:22 vm00 bash[20770]: cluster 2026-03-09T17:34:21.804983+0000 mon.a (mon.0) 2540 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T17:34:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:22 vm00 bash[20770]: cluster 2026-03-09T17:34:21.804983+0000 mon.a (mon.0) 2540 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T17:34:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:22 vm00 bash[20770]: audit 2026-03-09T17:34:21.810959+0000 mgr.y (mgr.14505) 335 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T17:34:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:22 vm00 bash[20770]: audit 2026-03-09T17:34:21.810959+0000 mgr.y (mgr.14505) 335 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:23 vm02 bash[23351]: cluster 2026-03-09T17:34:22.755825+0000 mgr.y (mgr.14505) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 2 op/s 2026-03-09T17:34:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:23 vm02 bash[23351]: cluster 2026-03-09T17:34:22.755825+0000 mgr.y (mgr.14505) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 2 op/s 2026-03-09T17:34:24.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:23 vm00 bash[28333]: cluster 2026-03-09T17:34:22.755825+0000 mgr.y (mgr.14505) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 2 op/s 2026-03-09T17:34:24.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:23 vm00 bash[28333]: cluster 2026-03-09T17:34:22.755825+0000 mgr.y (mgr.14505) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 2 op/s 2026-03-09T17:34:24.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:23 vm00 bash[20770]: cluster 2026-03-09T17:34:22.755825+0000 mgr.y (mgr.14505) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 2 op/s 2026-03-09T17:34:24.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:23 vm00 bash[20770]: cluster 2026-03-09T17:34:22.755825+0000 mgr.y (mgr.14505) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 2 op/s 2026-03-09T17:34:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:25 vm02 bash[23351]: cluster 2026-03-09T17:34:24.756479+0000 mgr.y (mgr.14505) 337 : cluster [DBG] pgmap v545: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:34:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:25 vm02 bash[23351]: cluster 2026-03-09T17:34:24.756479+0000 mgr.y (mgr.14505) 337 : cluster [DBG] pgmap v545: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:34:26.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:25 vm00 bash[28333]: cluster 2026-03-09T17:34:24.756479+0000 mgr.y (mgr.14505) 337 : cluster [DBG] pgmap v545: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:34:26.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:25 vm00 bash[28333]: cluster 2026-03-09T17:34:24.756479+0000 mgr.y (mgr.14505) 337 : cluster [DBG] pgmap v545: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:34:26.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:25 vm00 bash[20770]: cluster 2026-03-09T17:34:24.756479+0000 
mgr.y (mgr.14505) 337 : cluster [DBG] pgmap v545: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:34:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:25 vm00 bash[20770]: cluster 2026-03-09T17:34:24.756479+0000 mgr.y (mgr.14505) 337 : cluster [DBG] pgmap v545: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:34:26.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:34:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:34:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:34:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:27 vm02 bash[23351]: cluster 2026-03-09T17:34:26.756748+0000 mgr.y (mgr.14505) 338 : cluster [DBG] pgmap v546: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:34:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:27 vm02 bash[23351]: cluster 2026-03-09T17:34:26.756748+0000 mgr.y (mgr.14505) 338 : cluster [DBG] pgmap v546: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:34:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:27 vm02 bash[23351]: audit 2026-03-09T17:34:27.738462+0000 mon.c (mon.2) 591 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:27 vm02 bash[23351]: audit 2026-03-09T17:34:27.738462+0000 mon.c (mon.2) 591 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:27 vm00 bash[28333]: cluster 2026-03-09T17:34:26.756748+0000 mgr.y (mgr.14505) 338 : cluster [DBG] pgmap v546: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:27 vm00 bash[28333]: cluster 2026-03-09T17:34:26.756748+0000 mgr.y (mgr.14505) 338 : cluster [DBG] pgmap v546: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:27 vm00 bash[28333]: audit 2026-03-09T17:34:27.738462+0000 mon.c (mon.2) 591 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:27 vm00 bash[28333]: audit 2026-03-09T17:34:27.738462+0000 mon.c (mon.2) 591 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:27 vm00 bash[20770]: cluster 2026-03-09T17:34:26.756748+0000 mgr.y (mgr.14505) 338 : cluster [DBG] pgmap v546: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:34:28.287 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:27 vm00 bash[20770]: cluster 2026-03-09T17:34:26.756748+0000 mgr.y (mgr.14505) 338 : cluster [DBG] pgmap v546: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:27 vm00 bash[20770]: audit 2026-03-09T17:34:27.738462+0000 mon.c (mon.2) 591 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:28.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:27 vm00 bash[20770]: audit 2026-03-09T17:34:27.738462+0000 mon.c (mon.2) 591 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:30.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:29 vm02 bash[23351]: cluster 2026-03-09T17:34:28.757662+0000 mgr.y (mgr.14505) 339 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:34:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:29 vm02 bash[23351]: cluster 2026-03-09T17:34:28.757662+0000 mgr.y (mgr.14505) 339 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:34:30.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:29 vm00 bash[28333]: cluster 2026-03-09T17:34:28.757662+0000 mgr.y (mgr.14505) 339 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:34:30.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:29 vm00 bash[28333]: cluster 2026-03-09T17:34:28.757662+0000 mgr.y (mgr.14505) 339 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:34:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:29 vm00 bash[20770]: cluster 2026-03-09T17:34:28.757662+0000 mgr.y (mgr.14505) 339 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:34:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:29 vm00 bash[20770]: cluster 2026-03-09T17:34:28.757662+0000 mgr.y (mgr.14505) 339 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:34:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:31 vm02 bash[23351]: cluster 2026-03-09T17:34:30.757978+0000 mgr.y (mgr.14505) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-09T17:34:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:31 vm02 bash[23351]: cluster 2026-03-09T17:34:30.757978+0000 mgr.y (mgr.14505) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-09T17:34:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:31 vm02 bash[23351]: cluster 2026-03-09T17:34:31.711058+0000 mon.a (mon.0) 2541 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T17:34:32.136 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:31 vm02 bash[23351]: cluster 2026-03-09T17:34:31.711058+0000 mon.a (mon.0) 2541 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T17:34:32.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:34:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:34:32.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:31 vm00 bash[28333]: cluster 2026-03-09T17:34:30.757978+0000 mgr.y (mgr.14505) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-09T17:34:32.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:31 vm00 bash[28333]: cluster 2026-03-09T17:34:30.757978+0000 mgr.y (mgr.14505) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-09T17:34:32.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:31 vm00 bash[28333]: cluster 2026-03-09T17:34:31.711058+0000 mon.a (mon.0) 2541 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T17:34:32.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:31 vm00 bash[28333]: cluster 2026-03-09T17:34:31.711058+0000 mon.a (mon.0) 2541 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T17:34:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:31 vm00 bash[20770]: cluster 2026-03-09T17:34:30.757978+0000 mgr.y (mgr.14505) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-09T17:34:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:31 vm00 bash[20770]: cluster 2026-03-09T17:34:30.757978+0000 mgr.y (mgr.14505) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-09T17:34:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:31 vm00 bash[20770]: cluster 2026-03-09T17:34:31.711058+0000 mon.a (mon.0) 2541 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T17:34:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:31 vm00 bash[20770]: cluster 2026-03-09T17:34:31.711058+0000 mon.a (mon.0) 2541 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T17:34:33.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:32 vm00 bash[28333]: audit 2026-03-09T17:34:31.819015+0000 mgr.y (mgr.14505) 341 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:33.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:32 vm00 bash[28333]: audit 2026-03-09T17:34:31.819015+0000 mgr.y (mgr.14505) 341 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:33.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:32 vm00 bash[28333]: cluster 2026-03-09T17:34:32.710453+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T17:34:33.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:32 vm00 bash[28333]: cluster 2026-03-09T17:34:32.710453+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T17:34:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:32 vm00 bash[20770]: audit 2026-03-09T17:34:31.819015+0000 mgr.y (mgr.14505) 341 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:32 vm00 bash[20770]: audit 2026-03-09T17:34:31.819015+0000 mgr.y (mgr.14505) 341 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:32 vm00 bash[20770]: cluster 2026-03-09T17:34:32.710453+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T17:34:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:32 vm00 bash[20770]: cluster 2026-03-09T17:34:32.710453+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T17:34:33.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:32 vm02 bash[23351]: audit 2026-03-09T17:34:31.819015+0000 mgr.y (mgr.14505) 341 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:32 vm02 bash[23351]: audit 2026-03-09T17:34:31.819015+0000 mgr.y (mgr.14505) 341 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:32 vm02 bash[23351]: cluster 2026-03-09T17:34:32.710453+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T17:34:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:32 vm02 bash[23351]: cluster 2026-03-09T17:34:32.710453+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T17:34:34.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:33 vm00 bash[28333]: cluster 2026-03-09T17:34:32.758447+0000 mgr.y (mgr.14505) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:34:34.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:33 vm00 bash[28333]: cluster 2026-03-09T17:34:32.758447+0000 mgr.y (mgr.14505) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:34:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:33 vm00 bash[20770]: cluster 2026-03-09T17:34:32.758447+0000 mgr.y (mgr.14505) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:34:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:33 vm00 bash[20770]: cluster 2026-03-09T17:34:32.758447+0000 mgr.y (mgr.14505) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:34:34.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:33 vm02 bash[23351]: cluster 2026-03-09T17:34:32.758447+0000 mgr.y (mgr.14505) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:34:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:33 vm02 bash[23351]: cluster 2026-03-09T17:34:32.758447+0000 mgr.y (mgr.14505) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:34:36.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 17:34:35 vm00 bash[28333]: cluster 2026-03-09T17:34:34.759066+0000 mgr.y (mgr.14505) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:36.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:35 vm00 bash[28333]: cluster 2026-03-09T17:34:34.759066+0000 mgr.y (mgr.14505) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:35 vm00 bash[20770]: cluster 2026-03-09T17:34:34.759066+0000 mgr.y (mgr.14505) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:35 vm00 bash[20770]: cluster 2026-03-09T17:34:34.759066+0000 mgr.y (mgr.14505) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:35 vm02 bash[23351]: cluster 2026-03-09T17:34:34.759066+0000 mgr.y (mgr.14505) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:35 vm02 bash[23351]: cluster 2026-03-09T17:34:34.759066+0000 mgr.y (mgr.14505) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:36.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:34:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:34:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:34:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:37 vm00 bash[28333]: cluster 2026-03-09T17:34:36.759332+0000 mgr.y (mgr.14505) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:37 vm00 bash[28333]: cluster 2026-03-09T17:34:36.759332+0000 mgr.y (mgr.14505) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:37 vm00 bash[20770]: cluster 2026-03-09T17:34:36.759332+0000 mgr.y (mgr.14505) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:37 vm00 bash[20770]: cluster 2026-03-09T17:34:36.759332+0000 mgr.y (mgr.14505) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:37 vm02 bash[23351]: cluster 2026-03-09T17:34:36.759332+0000 mgr.y (mgr.14505) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:37 vm02 bash[23351]: cluster 2026-03-09T17:34:36.759332+0000 mgr.y (mgr.14505) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB 
avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:40.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:39 vm00 bash[28333]: cluster 2026-03-09T17:34:38.760060+0000 mgr.y (mgr.14505) 345 : cluster [DBG] pgmap v554: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:39 vm00 bash[28333]: cluster 2026-03-09T17:34:38.760060+0000 mgr.y (mgr.14505) 345 : cluster [DBG] pgmap v554: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:39 vm00 bash[20770]: cluster 2026-03-09T17:34:38.760060+0000 mgr.y (mgr.14505) 345 : cluster [DBG] pgmap v554: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:39 vm00 bash[20770]: cluster 2026-03-09T17:34:38.760060+0000 mgr.y (mgr.14505) 345 : cluster [DBG] pgmap v554: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:39 vm02 bash[23351]: cluster 2026-03-09T17:34:38.760060+0000 mgr.y (mgr.14505) 345 : cluster [DBG] pgmap v554: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:39 vm02 bash[23351]: cluster 2026-03-09T17:34:38.760060+0000 mgr.y (mgr.14505) 345 : cluster [DBG] pgmap v554: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:41 vm02 bash[23351]: cluster 2026-03-09T17:34:40.760384+0000 mgr.y (mgr.14505) 346 : cluster [DBG] pgmap v555: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:41 vm02 bash[23351]: cluster 2026-03-09T17:34:40.760384+0000 mgr.y (mgr.14505) 346 : cluster [DBG] pgmap v555: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:42.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:34:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:34:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:41 vm00 bash[28333]: cluster 2026-03-09T17:34:40.760384+0000 mgr.y (mgr.14505) 346 : cluster [DBG] pgmap v555: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:41 vm00 bash[28333]: cluster 2026-03-09T17:34:40.760384+0000 mgr.y (mgr.14505) 346 : cluster [DBG] pgmap v555: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:41 vm00 bash[20770]: cluster 2026-03-09T17:34:40.760384+0000 mgr.y 
(mgr.14505) 346 : cluster [DBG] pgmap v555: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:41 vm00 bash[20770]: cluster 2026-03-09T17:34:40.760384+0000 mgr.y (mgr.14505) 346 : cluster [DBG] pgmap v555: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:43 vm00 bash[28333]: audit 2026-03-09T17:34:41.827339+0000 mgr.y (mgr.14505) 347 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:43 vm00 bash[28333]: audit 2026-03-09T17:34:41.827339+0000 mgr.y (mgr.14505) 347 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:43 vm00 bash[28333]: audit 2026-03-09T17:34:42.745933+0000 mon.c (mon.2) 592 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:43 vm00 bash[28333]: audit 2026-03-09T17:34:42.745933+0000 mon.c (mon.2) 592 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:43 vm00 bash[20770]: audit 2026-03-09T17:34:41.827339+0000 mgr.y (mgr.14505) 347 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:43 vm00 bash[20770]: audit 2026-03-09T17:34:41.827339+0000 mgr.y (mgr.14505) 347 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:43 vm00 bash[20770]: audit 2026-03-09T17:34:42.745933+0000 mon.c (mon.2) 592 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:43 vm00 bash[20770]: audit 2026-03-09T17:34:42.745933+0000 mon.c (mon.2) 592 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:43 vm02 bash[23351]: audit 2026-03-09T17:34:41.827339+0000 mgr.y (mgr.14505) 347 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:43 vm02 bash[23351]: audit 2026-03-09T17:34:41.827339+0000 mgr.y (mgr.14505) 347 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:43 vm02 bash[23351]: audit 2026-03-09T17:34:42.745933+0000 mon.c (mon.2) 592 : audit 
[DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:43 vm02 bash[23351]: audit 2026-03-09T17:34:42.745933+0000 mon.c (mon.2) 592 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:44.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:44 vm00 bash[28333]: cluster 2026-03-09T17:34:42.762959+0000 mgr.y (mgr.14505) 348 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:44 vm00 bash[28333]: cluster 2026-03-09T17:34:42.762959+0000 mgr.y (mgr.14505) 348 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:44 vm00 bash[28333]: cluster 2026-03-09T17:34:43.012255+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:44 vm00 bash[28333]: cluster 2026-03-09T17:34:43.012255+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:44 vm00 bash[20770]: cluster 2026-03-09T17:34:42.762959+0000 mgr.y (mgr.14505) 348 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:44 vm00 bash[20770]: cluster 2026-03-09T17:34:42.762959+0000 mgr.y (mgr.14505) 348 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:44 vm00 bash[20770]: cluster 2026-03-09T17:34:43.012255+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T17:34:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:44 vm00 bash[20770]: cluster 2026-03-09T17:34:43.012255+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T17:34:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:44 vm02 bash[23351]: cluster 2026-03-09T17:34:42.762959+0000 mgr.y (mgr.14505) 348 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:44 vm02 bash[23351]: cluster 2026-03-09T17:34:42.762959+0000 mgr.y (mgr.14505) 348 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:44 vm02 bash[23351]: cluster 2026-03-09T17:34:43.012255+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T17:34:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:44 vm02 bash[23351]: cluster 2026-03-09T17:34:43.012255+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T17:34:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:46 vm00 bash[28333]: cluster 
2026-03-09T17:34:44.763266+0000 mgr.y (mgr.14505) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:46 vm00 bash[28333]: cluster 2026-03-09T17:34:44.763266+0000 mgr.y (mgr.14505) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:46 vm00 bash[20770]: cluster 2026-03-09T17:34:44.763266+0000 mgr.y (mgr.14505) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:46 vm00 bash[20770]: cluster 2026-03-09T17:34:44.763266+0000 mgr.y (mgr.14505) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:34:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:34:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:34:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:46 vm02 bash[23351]: cluster 2026-03-09T17:34:44.763266+0000 mgr.y (mgr.14505) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:46 vm02 bash[23351]: cluster 2026-03-09T17:34:44.763266+0000 mgr.y (mgr.14505) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:34:48.037 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:47 vm00 bash[28333]: cluster 2026-03-09T17:34:46.711077+0000 mon.a (mon.0) 2544 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:47 vm00 bash[28333]: cluster 2026-03-09T17:34:46.711077+0000 mon.a (mon.0) 2544 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:47 vm00 bash[28333]: cluster 2026-03-09T17:34:46.763635+0000 mgr.y (mgr.14505) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:47 vm00 bash[28333]: cluster 2026-03-09T17:34:46.763635+0000 mgr.y (mgr.14505) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:47 vm00 bash[20770]: cluster 2026-03-09T17:34:46.711077+0000 mon.a (mon.0) 2544 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:47 vm00 bash[20770]: cluster 2026-03-09T17:34:46.711077+0000 mon.a (mon.0) 2544 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:47 vm00 bash[20770]: cluster 2026-03-09T17:34:46.763635+0000 mgr.y (mgr.14505) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s 
rd, 0 op/s 2026-03-09T17:34:48.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:47 vm00 bash[20770]: cluster 2026-03-09T17:34:46.763635+0000 mgr.y (mgr.14505) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:47 vm02 bash[23351]: cluster 2026-03-09T17:34:46.711077+0000 mon.a (mon.0) 2544 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T17:34:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:47 vm02 bash[23351]: cluster 2026-03-09T17:34:46.711077+0000 mon.a (mon.0) 2544 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T17:34:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:47 vm02 bash[23351]: cluster 2026-03-09T17:34:46.763635+0000 mgr.y (mgr.14505) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:48.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:47 vm02 bash[23351]: cluster 2026-03-09T17:34:46.763635+0000 mgr.y (mgr.14505) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:47.932867+0000 mon.c (mon.2) 593 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:47.932867+0000 mon.c (mon.2) 593 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:48.280440+0000 mon.c (mon.2) 594 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:48.280440+0000 mon.c (mon.2) 594 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:48.281620+0000 mon.c (mon.2) 595 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:48.281620+0000 mon.c (mon.2) 595 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:48.288385+0000 mon.a (mon.0) 2545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:48 vm00 bash[20770]: audit 2026-03-09T17:34:48.288385+0000 mon.a (mon.0) 2545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:47.932867+0000 
mon.c (mon.2) 593 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:47.932867+0000 mon.c (mon.2) 593 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:48.280440+0000 mon.c (mon.2) 594 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:48.280440+0000 mon.c (mon.2) 594 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:48.281620+0000 mon.c (mon.2) 595 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:48.281620+0000 mon.c (mon.2) 595 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:48.288385+0000 mon.a (mon.0) 2545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:49.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:48 vm00 bash[28333]: audit 2026-03-09T17:34:48.288385+0000 mon.a (mon.0) 2545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:47.932867+0000 mon.c (mon.2) 593 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:47.932867+0000 mon.c (mon.2) 593 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:48.280440+0000 mon.c (mon.2) 594 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:48.280440+0000 mon.c (mon.2) 594 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:48.281620+0000 mon.c (mon.2) 595 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 
2026-03-09T17:34:48.281620+0000 mon.c (mon.2) 595 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:48.288385+0000 mon.a (mon.0) 2545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:49.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:48 vm02 bash[23351]: audit 2026-03-09T17:34:48.288385+0000 mon.a (mon.0) 2545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:34:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:49 vm02 bash[23351]: cluster 2026-03-09T17:34:48.764444+0000 mgr.y (mgr.14505) 351 : cluster [DBG] pgmap v561: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:50.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:49 vm02 bash[23351]: cluster 2026-03-09T17:34:48.764444+0000 mgr.y (mgr.14505) 351 : cluster [DBG] pgmap v561: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:49 vm00 bash[28333]: cluster 2026-03-09T17:34:48.764444+0000 mgr.y (mgr.14505) 351 : cluster [DBG] pgmap v561: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:49 vm00 bash[28333]: cluster 2026-03-09T17:34:48.764444+0000 mgr.y (mgr.14505) 351 : cluster [DBG] pgmap v561: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:49 vm00 bash[20770]: cluster 2026-03-09T17:34:48.764444+0000 mgr.y (mgr.14505) 351 : cluster [DBG] pgmap v561: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:49 vm00 bash[20770]: cluster 2026-03-09T17:34:48.764444+0000 mgr.y (mgr.14505) 351 : cluster [DBG] pgmap v561: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:34:52.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:51 vm02 bash[23351]: cluster 2026-03-09T17:34:50.764829+0000 mgr.y (mgr.14505) 352 : cluster [DBG] pgmap v562: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:34:52.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:51 vm02 bash[23351]: cluster 2026-03-09T17:34:50.764829+0000 mgr.y (mgr.14505) 352 : cluster [DBG] pgmap v562: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:34:52.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:34:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:34:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:51 vm00 bash[28333]: cluster 2026-03-09T17:34:50.764829+0000 mgr.y (mgr.14505) 352 : cluster [DBG] pgmap v562: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T17:34:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:51 vm00 bash[28333]: cluster 2026-03-09T17:34:50.764829+0000 mgr.y (mgr.14505) 352 : cluster [DBG] pgmap v562: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:34:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:51 vm00 bash[20770]: cluster 2026-03-09T17:34:50.764829+0000 mgr.y (mgr.14505) 352 : cluster [DBG] pgmap v562: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:34:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:51 vm00 bash[20770]: cluster 2026-03-09T17:34:50.764829+0000 mgr.y (mgr.14505) 352 : cluster [DBG] pgmap v562: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:34:53.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:52 vm02 bash[23351]: audit 2026-03-09T17:34:51.835442+0000 mgr.y (mgr.14505) 353 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:53.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:52 vm02 bash[23351]: audit 2026-03-09T17:34:51.835442+0000 mgr.y (mgr.14505) 353 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:52 vm00 bash[28333]: audit 2026-03-09T17:34:51.835442+0000 mgr.y (mgr.14505) 353 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:52 vm00 bash[28333]: audit 2026-03-09T17:34:51.835442+0000 mgr.y (mgr.14505) 353 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:52 vm00 bash[20770]: audit 2026-03-09T17:34:51.835442+0000 mgr.y (mgr.14505) 353 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:52 vm00 bash[20770]: audit 2026-03-09T17:34:51.835442+0000 mgr.y (mgr.14505) 353 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: cluster 2026-03-09T17:34:52.765370+0000 mgr.y (mgr.14505) 354 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 525 B/s rd, 0 op/s 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: cluster 2026-03-09T17:34:52.765370+0000 mgr.y (mgr.14505) 354 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 525 B/s rd, 0 op/s 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.023338+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.023338+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.024262+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.024262+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.024540+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.024540+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.025286+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:53 vm00 bash[28333]: audit 2026-03-09T17:34:53.025286+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: cluster 2026-03-09T17:34:52.765370+0000 mgr.y (mgr.14505) 354 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 525 B/s rd, 0 op/s 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: cluster 2026-03-09T17:34:52.765370+0000 mgr.y (mgr.14505) 354 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 525 B/s rd, 0 op/s 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.023338+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.023338+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.024262+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.024262+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.024540+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.024540+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.025286+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:53 vm00 bash[20770]: audit 2026-03-09T17:34:53.025286+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: cluster 2026-03-09T17:34:52.765370+0000 mgr.y (mgr.14505) 354 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 525 B/s rd, 0 op/s 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: cluster 2026-03-09T17:34:52.765370+0000 mgr.y (mgr.14505) 354 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 525 B/s rd, 0 op/s 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.023338+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.023338+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.024262+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.024262+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.024540+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.024540+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.025286+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:53 vm02 bash[23351]: audit 2026-03-09T17:34:53.025286+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-63"}]: dispatch 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:54 vm00 bash[28333]: cluster 2026-03-09T17:34:53.879781+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:54 vm00 bash[28333]: cluster 2026-03-09T17:34:53.879781+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:54 vm00 bash[28333]: cluster 2026-03-09T17:34:54.890374+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:54 vm00 bash[28333]: cluster 2026-03-09T17:34:54.890374+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:54 vm00 bash[28333]: audit 2026-03-09T17:34:54.899789+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:54 vm00 bash[28333]: audit 2026-03-09T17:34:54.899789+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:54 vm00 bash[20770]: cluster 2026-03-09T17:34:53.879781+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:54 vm00 bash[20770]: cluster 2026-03-09T17:34:53.879781+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:54 vm00 bash[20770]: cluster 2026-03-09T17:34:54.890374+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:54 vm00 bash[20770]: cluster 2026-03-09T17:34:54.890374+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:54 vm00 bash[20770]: audit 2026-03-09T17:34:54.899789+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:55.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:54 vm00 bash[20770]: audit 2026-03-09T17:34:54.899789+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:54 vm02 bash[23351]: cluster 2026-03-09T17:34:53.879781+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T17:34:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:54 vm02 bash[23351]: cluster 2026-03-09T17:34:53.879781+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T17:34:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:54 vm02 bash[23351]: cluster 2026-03-09T17:34:54.890374+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T17:34:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:54 vm02 bash[23351]: cluster 2026-03-09T17:34:54.890374+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T17:34:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:54 vm02 bash[23351]: audit 2026-03-09T17:34:54.899789+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:54 vm02 bash[23351]: audit 2026-03-09T17:34:54.899789+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: cluster 2026-03-09T17:34:54.765808+0000 mgr.y (mgr.14505) 355 : cluster [DBG] pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: cluster 2026-03-09T17:34:54.765808+0000 mgr.y (mgr.14505) 355 : cluster [DBG] pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:54.906670+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:54.906670+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.889833+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.889833+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.900043+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.900043+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: cluster 2026-03-09T17:34:55.901838+0000 mon.a (mon.0) 2552 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: cluster 2026-03-09T17:34:55.901838+0000 mon.a (mon.0) 2552 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.902035+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.902035+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.910776+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:55 vm00 bash[28333]: audit 2026-03-09T17:34:55.910776+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: cluster 2026-03-09T17:34:54.765808+0000 mgr.y (mgr.14505) 355 : cluster [DBG] pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: cluster 2026-03-09T17:34:54.765808+0000 mgr.y (mgr.14505) 355 : cluster [DBG] pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:54.906670+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:54.906670+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.889833+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.889833+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.900043+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.900043+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: cluster 2026-03-09T17:34:55.901838+0000 mon.a (mon.0) 2552 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: cluster 2026-03-09T17:34:55.901838+0000 mon.a (mon.0) 2552 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.902035+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.902035+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.910776+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:55 vm00 bash[20770]: audit 2026-03-09T17:34:55.910776+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: cluster 2026-03-09T17:34:54.765808+0000 mgr.y (mgr.14505) 355 : cluster [DBG] pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: cluster 2026-03-09T17:34:54.765808+0000 mgr.y (mgr.14505) 355 : cluster [DBG] pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:54.906670+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:54.906670+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.889833+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.889833+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.900043+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.900043+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: cluster 2026-03-09T17:34:55.901838+0000 mon.a (mon.0) 2552 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: cluster 2026-03-09T17:34:55.901838+0000 mon.a (mon.0) 2552 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.902035+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.902035+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.910776+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:55 vm02 bash[23351]: audit 2026-03-09T17:34:55.910776+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:34:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:34:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:34:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:57 vm00 bash[28333]: audit 2026-03-09T17:34:56.751146+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:57 vm00 bash[28333]: audit 2026-03-09T17:34:56.751146+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:57 vm00 bash[28333]: cluster 2026-03-09T17:34:56.754081+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:57 vm00 bash[28333]: cluster 2026-03-09T17:34:56.754081+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:57 vm00 bash[28333]: cluster 2026-03-09T17:34:56.766099+0000 mgr.y (mgr.14505) 356 : cluster [DBG] pgmap v569: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:57 vm00 bash[28333]: cluster 2026-03-09T17:34:56.766099+0000 mgr.y (mgr.14505) 356 : cluster [DBG] pgmap v569: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:57 vm00 bash[20770]: audit 2026-03-09T17:34:56.751146+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:57 vm00 bash[20770]: audit 2026-03-09T17:34:56.751146+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:57 vm00 bash[20770]: cluster 2026-03-09T17:34:56.754081+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:57 vm00 bash[20770]: cluster 2026-03-09T17:34:56.754081+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:57 vm00 bash[20770]: cluster 2026-03-09T17:34:56.766099+0000 mgr.y (mgr.14505) 356 : cluster [DBG] pgmap v569: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:57 vm00 bash[20770]: cluster 2026-03-09T17:34:56.766099+0000 mgr.y (mgr.14505) 356 : cluster [DBG] pgmap v569: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:57 vm02 bash[23351]: audit 2026-03-09T17:34:56.751146+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:57 vm02 bash[23351]: audit 2026-03-09T17:34:56.751146+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:34:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:57 vm02 bash[23351]: cluster 2026-03-09T17:34:56.754081+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T17:34:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:57 vm02 bash[23351]: cluster 2026-03-09T17:34:56.754081+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T17:34:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:57 vm02 bash[23351]: cluster 2026-03-09T17:34:56.766099+0000 mgr.y (mgr.14505) 356 : cluster [DBG] pgmap v569: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:57 vm02 bash[23351]: cluster 2026-03-09T17:34:56.766099+0000 mgr.y (mgr.14505) 356 : cluster [DBG] pgmap v569: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:58 vm00 bash[28333]: audit 2026-03-09T17:34:57.754443+0000 mon.c (mon.2) 596 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:58 vm00 bash[28333]: audit 2026-03-09T17:34:57.754443+0000 mon.c (mon.2) 596 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:58 vm00 bash[28333]: cluster 2026-03-09T17:34:57.776563+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 
2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:58 vm00 bash[28333]: cluster 2026-03-09T17:34:57.776563+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:58 vm00 bash[20770]: audit 2026-03-09T17:34:57.754443+0000 mon.c (mon.2) 596 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:58 vm00 bash[20770]: audit 2026-03-09T17:34:57.754443+0000 mon.c (mon.2) 596 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:58 vm00 bash[20770]: cluster 2026-03-09T17:34:57.776563+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T17:34:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:58 vm00 bash[20770]: cluster 2026-03-09T17:34:57.776563+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T17:34:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:58 vm02 bash[23351]: audit 2026-03-09T17:34:57.754443+0000 mon.c (mon.2) 596 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:58 vm02 bash[23351]: audit 2026-03-09T17:34:57.754443+0000 mon.c (mon.2) 596 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:34:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:58 vm02 bash[23351]: cluster 2026-03-09T17:34:57.776563+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T17:34:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:58 vm02 bash[23351]: cluster 2026-03-09T17:34:57.776563+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:59 vm00 bash[28333]: cluster 2026-03-09T17:34:58.766610+0000 mgr.y (mgr.14505) 357 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:59 vm00 bash[28333]: cluster 2026-03-09T17:34:58.766610+0000 mgr.y (mgr.14505) 357 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:59 vm00 bash[28333]: cluster 2026-03-09T17:34:58.782126+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:34:59 vm00 bash[28333]: cluster 2026-03-09T17:34:58.782126+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:59 vm00 bash[20770]: cluster 2026-03-09T17:34:58.766610+0000 mgr.y (mgr.14505) 357 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:00.038 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:59 vm00 bash[20770]: cluster 2026-03-09T17:34:58.766610+0000 mgr.y (mgr.14505) 357 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:59 vm00 bash[20770]: cluster 2026-03-09T17:34:58.782126+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T17:35:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:34:59 vm00 bash[20770]: cluster 2026-03-09T17:34:58.782126+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T17:35:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:59 vm02 bash[23351]: cluster 2026-03-09T17:34:58.766610+0000 mgr.y (mgr.14505) 357 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:59 vm02 bash[23351]: cluster 2026-03-09T17:34:58.766610+0000 mgr.y (mgr.14505) 357 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:59 vm02 bash[23351]: cluster 2026-03-09T17:34:58.782126+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T17:35:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:34:59 vm02 bash[23351]: cluster 2026-03-09T17:34:58.782126+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T17:35:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:00 vm02 bash[23351]: cluster 2026-03-09T17:34:59.789309+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T17:35:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:00 vm02 bash[23351]: cluster 2026-03-09T17:34:59.789309+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T17:35:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:00 vm00 bash[28333]: cluster 2026-03-09T17:34:59.789309+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T17:35:01.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:00 vm00 bash[28333]: cluster 2026-03-09T17:34:59.789309+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T17:35:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:00 vm00 bash[20770]: cluster 2026-03-09T17:34:59.789309+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T17:35:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:00 vm00 bash[20770]: cluster 2026-03-09T17:34:59.789309+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T17:35:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:01 vm02 bash[23351]: cluster 2026-03-09T17:35:00.766904+0000 mgr.y (mgr.14505) 358 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:01 vm02 bash[23351]: cluster 2026-03-09T17:35:00.766904+0000 mgr.y (mgr.14505) 358 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 
2026-03-09T17:35:02.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:35:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:35:02.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:01 vm00 bash[28333]: cluster 2026-03-09T17:35:00.766904+0000 mgr.y (mgr.14505) 358 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:02.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:01 vm00 bash[28333]: cluster 2026-03-09T17:35:00.766904+0000 mgr.y (mgr.14505) 358 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:02.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:01 vm00 bash[20770]: cluster 2026-03-09T17:35:00.766904+0000 mgr.y (mgr.14505) 358 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:02.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:01 vm00 bash[20770]: cluster 2026-03-09T17:35:00.766904+0000 mgr.y (mgr.14505) 358 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:35:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:02 vm02 bash[23351]: audit 2026-03-09T17:35:01.841933+0000 mgr.y (mgr.14505) 359 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:02 vm02 bash[23351]: audit 2026-03-09T17:35:01.841933+0000 mgr.y (mgr.14505) 359 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:03.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:02 vm00 bash[28333]: audit 2026-03-09T17:35:01.841933+0000 mgr.y (mgr.14505) 359 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:02 vm00 bash[28333]: audit 2026-03-09T17:35:01.841933+0000 mgr.y (mgr.14505) 359 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:02 vm00 bash[20770]: audit 2026-03-09T17:35:01.841933+0000 mgr.y (mgr.14505) 359 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:02 vm00 bash[20770]: audit 2026-03-09T17:35:01.841933+0000 mgr.y (mgr.14505) 359 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:03 vm02 bash[23351]: cluster 2026-03-09T17:35:02.767574+0000 mgr.y (mgr.14505) 360 : cluster [DBG] pgmap v575: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:35:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:03 vm02 bash[23351]: cluster 2026-03-09T17:35:02.767574+0000 mgr.y 
(mgr.14505) 360 : cluster [DBG] pgmap v575: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:35:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:03 vm00 bash[20770]: cluster 2026-03-09T17:35:02.767574+0000 mgr.y (mgr.14505) 360 : cluster [DBG] pgmap v575: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:35:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:03 vm00 bash[20770]: cluster 2026-03-09T17:35:02.767574+0000 mgr.y (mgr.14505) 360 : cluster [DBG] pgmap v575: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:35:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:03 vm00 bash[28333]: cluster 2026-03-09T17:35:02.767574+0000 mgr.y (mgr.14505) 360 : cluster [DBG] pgmap v575: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:35:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:03 vm00 bash[28333]: cluster 2026-03-09T17:35:02.767574+0000 mgr.y (mgr.14505) 360 : cluster [DBG] pgmap v575: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T17:35:06.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:05 vm00 bash[28333]: cluster 2026-03-09T17:35:04.768087+0000 mgr.y (mgr.14505) 361 : cluster [DBG] pgmap v576: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 732 B/s wr, 4 op/s 2026-03-09T17:35:06.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:05 vm00 bash[28333]: cluster 2026-03-09T17:35:04.768087+0000 mgr.y (mgr.14505) 361 : cluster [DBG] pgmap v576: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 732 B/s wr, 4 op/s 2026-03-09T17:35:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:05 vm00 bash[20770]: cluster 2026-03-09T17:35:04.768087+0000 mgr.y (mgr.14505) 361 : cluster [DBG] pgmap v576: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 732 B/s wr, 4 op/s 2026-03-09T17:35:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:05 vm00 bash[20770]: cluster 2026-03-09T17:35:04.768087+0000 mgr.y (mgr.14505) 361 : cluster [DBG] pgmap v576: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 732 B/s wr, 4 op/s 2026-03-09T17:35:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:05 vm02 bash[23351]: cluster 2026-03-09T17:35:04.768087+0000 mgr.y (mgr.14505) 361 : cluster [DBG] pgmap v576: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 732 B/s wr, 4 op/s 2026-03-09T17:35:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:05 vm02 bash[23351]: cluster 2026-03-09T17:35:04.768087+0000 mgr.y (mgr.14505) 361 : cluster [DBG] pgmap v576: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 732 B/s wr, 4 op/s 2026-03-09T17:35:06.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:35:06 vm00 bash[21037]: 
::ffff:192.168.123.102 - - [09/Mar/2026:17:35:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:35:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:07 vm00 bash[28333]: cluster 2026-03-09T17:35:06.768390+0000 mgr.y (mgr.14505) 362 : cluster [DBG] pgmap v577: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:35:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:07 vm00 bash[28333]: cluster 2026-03-09T17:35:06.768390+0000 mgr.y (mgr.14505) 362 : cluster [DBG] pgmap v577: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:35:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:07 vm00 bash[20770]: cluster 2026-03-09T17:35:06.768390+0000 mgr.y (mgr.14505) 362 : cluster [DBG] pgmap v577: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:35:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:07 vm00 bash[20770]: cluster 2026-03-09T17:35:06.768390+0000 mgr.y (mgr.14505) 362 : cluster [DBG] pgmap v577: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:35:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:07 vm02 bash[23351]: cluster 2026-03-09T17:35:06.768390+0000 mgr.y (mgr.14505) 362 : cluster [DBG] pgmap v577: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:35:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:07 vm02 bash[23351]: cluster 2026-03-09T17:35:06.768390+0000 mgr.y (mgr.14505) 362 : cluster [DBG] pgmap v577: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: cluster 2026-03-09T17:35:08.769089+0000 mgr.y (mgr.14505) 363 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 410 B/s wr, 3 op/s 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: cluster 2026-03-09T17:35:08.769089+0000 mgr.y (mgr.14505) 363 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 410 B/s wr, 3 op/s 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.799718+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.799718+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.800351+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.800351+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.800739+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.800739+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.801291+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:10 vm00 bash[28333]: audit 2026-03-09T17:35:09.801291+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: cluster 2026-03-09T17:35:08.769089+0000 mgr.y (mgr.14505) 363 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 410 B/s wr, 3 op/s 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: cluster 2026-03-09T17:35:08.769089+0000 mgr.y (mgr.14505) 363 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 410 B/s wr, 3 op/s 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.799718+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.799718+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.800351+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.800351+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.800739+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.800739+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.801291+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:10 vm00 bash[20770]: audit 2026-03-09T17:35:09.801291+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: cluster 2026-03-09T17:35:08.769089+0000 mgr.y (mgr.14505) 363 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 410 B/s wr, 3 op/s 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: cluster 2026-03-09T17:35:08.769089+0000 mgr.y (mgr.14505) 363 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 410 B/s wr, 3 op/s 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.799718+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.799718+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.800351+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.800351+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.800739+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.800739+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.801291+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:10 vm02 bash[23351]: audit 2026-03-09T17:35:09.801291+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-65"}]: dispatch 2026-03-09T17:35:11.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:11 vm00 bash[28333]: cluster 2026-03-09T17:35:10.162371+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T17:35:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:11 vm00 bash[28333]: cluster 2026-03-09T17:35:10.162371+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T17:35:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:11 vm00 bash[20770]: cluster 2026-03-09T17:35:10.162371+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T17:35:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:11 vm00 bash[20770]: cluster 2026-03-09T17:35:10.162371+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T17:35:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:11 vm02 bash[23351]: cluster 2026-03-09T17:35:10.162371+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T17:35:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:11 vm02 bash[23351]: cluster 2026-03-09T17:35:10.162371+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T17:35:12.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:35:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: cluster 2026-03-09T17:35:10.769447+0000 mgr.y (mgr.14505) 364 : cluster [DBG] pgmap v580: 260 pgs: 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:35:12 vm00 bash[28333]: cluster 2026-03-09T17:35:10.769447+0000 mgr.y (mgr.14505) 364 : cluster [DBG] pgmap v580: 260 pgs: 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: cluster 2026-03-09T17:35:11.180626+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: cluster 2026-03-09T17:35:11.180626+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.191508+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.191508+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.192558+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.192558+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.741189+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.741189+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: cluster 2026-03-09T17:35:11.745839+0000 mon.a (mon.0) 2565 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: cluster 2026-03-09T17:35:11.745839+0000 mon.a (mon.0) 2565 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.746482+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:12 vm00 bash[28333]: audit 2026-03-09T17:35:11.746482+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: cluster 2026-03-09T17:35:10.769447+0000 mgr.y (mgr.14505) 364 : cluster [DBG] pgmap v580: 260 pgs: 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: cluster 2026-03-09T17:35:10.769447+0000 mgr.y (mgr.14505) 364 : cluster [DBG] pgmap v580: 260 pgs: 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: cluster 2026-03-09T17:35:11.180626+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: cluster 2026-03-09T17:35:11.180626+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.191508+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.191508+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.192558+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.192558+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.741189+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.741189+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: cluster 2026-03-09T17:35:11.745839+0000 mon.a (mon.0) 2565 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: cluster 2026-03-09T17:35:11.745839+0000 mon.a (mon.0) 2565 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.746482+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:12.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:12 vm00 bash[20770]: audit 2026-03-09T17:35:11.746482+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: cluster 2026-03-09T17:35:10.769447+0000 mgr.y (mgr.14505) 364 : cluster [DBG] pgmap v580: 260 pgs: 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: cluster 2026-03-09T17:35:10.769447+0000 mgr.y (mgr.14505) 364 : cluster [DBG] pgmap v580: 260 pgs: 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: cluster 2026-03-09T17:35:11.180626+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: cluster 2026-03-09T17:35:11.180626+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.191508+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.191508+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.192558+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.192558+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.741189+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.741189+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: cluster 2026-03-09T17:35:11.745839+0000 mon.a (mon.0) 2565 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: cluster 2026-03-09T17:35:11.745839+0000 mon.a (mon.0) 2565 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.746482+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:12 vm02 bash[23351]: audit 2026-03-09T17:35:11.746482+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:11.850141+0000 mgr.y (mgr.14505) 365 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:11.850141+0000 mgr.y (mgr.14505) 365 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: cluster 2026-03-09T17:35:12.752199+0000 mon.a (mon.0) 2566 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: cluster 2026-03-09T17:35:12.752199+0000 mon.a (mon.0) 2566 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.788104+0000 mon.c (mon.2) 597 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.788104+0000 mon.c (mon.2) 597 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 
2026-03-09T17:35:12.800932+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.800932+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.801549+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.801549+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.801994+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.801994+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.802567+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:13 vm00 bash[20770]: audit 2026-03-09T17:35:12.802567+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:11.850141+0000 mgr.y (mgr.14505) 365 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:11.850141+0000 mgr.y (mgr.14505) 365 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: cluster 2026-03-09T17:35:12.752199+0000 mon.a (mon.0) 2566 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: cluster 2026-03-09T17:35:12.752199+0000 mon.a (mon.0) 2566 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.788104+0000 mon.c (mon.2) 597 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.788104+0000 mon.c (mon.2) 597 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.800932+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.800932+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.801549+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.801549+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.801994+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.801994+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.802567+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:13 vm00 bash[28333]: audit 2026-03-09T17:35:12.802567+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:11.850141+0000 mgr.y (mgr.14505) 365 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:11.850141+0000 mgr.y (mgr.14505) 365 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: cluster 2026-03-09T17:35:12.752199+0000 mon.a (mon.0) 2566 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: cluster 2026-03-09T17:35:12.752199+0000 mon.a (mon.0) 2566 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.788104+0000 mon.c (mon.2) 597 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.788104+0000 mon.c (mon.2) 597 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.800932+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.800932+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.801549+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.801549+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.801994+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.801994+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.802567+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:13 vm02 bash[23351]: audit 2026-03-09T17:35:12.802567+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-67"}]: dispatch 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:14 vm00 bash[28333]: cluster 2026-03-09T17:35:12.769957+0000 mgr.y (mgr.14505) 366 : cluster [DBG] pgmap v584: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:14 vm00 bash[28333]: cluster 2026-03-09T17:35:12.769957+0000 mgr.y (mgr.14505) 366 : cluster [DBG] pgmap v584: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:14 vm00 bash[28333]: cluster 2026-03-09T17:35:13.755853+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:14 vm00 bash[28333]: cluster 2026-03-09T17:35:13.755853+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:14 vm00 bash[20770]: cluster 2026-03-09T17:35:12.769957+0000 mgr.y (mgr.14505) 366 : cluster [DBG] pgmap v584: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:14 vm00 bash[20770]: cluster 2026-03-09T17:35:12.769957+0000 mgr.y (mgr.14505) 366 : cluster [DBG] pgmap v584: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:14 vm00 bash[20770]: cluster 2026-03-09T17:35:13.755853+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 
2026-03-09T17:35:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:14 vm00 bash[20770]: cluster 2026-03-09T17:35:13.755853+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T17:35:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:14 vm02 bash[23351]: cluster 2026-03-09T17:35:12.769957+0000 mgr.y (mgr.14505) 366 : cluster [DBG] pgmap v584: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:14 vm02 bash[23351]: cluster 2026-03-09T17:35:12.769957+0000 mgr.y (mgr.14505) 366 : cluster [DBG] pgmap v584: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:14 vm02 bash[23351]: cluster 2026-03-09T17:35:13.755853+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T17:35:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:14 vm02 bash[23351]: cluster 2026-03-09T17:35:13.755853+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: cluster 2026-03-09T17:35:14.759667+0000 mon.a (mon.0) 2570 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: cluster 2026-03-09T17:35:14.759667+0000 mon.a (mon.0) 2570 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: cluster 2026-03-09T17:35:14.770284+0000 mgr.y (mgr.14505) 367 : cluster [DBG] pgmap v587: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 571 B/s wr, 2 op/s 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: cluster 2026-03-09T17:35:14.770284+0000 mgr.y (mgr.14505) 367 : cluster [DBG] pgmap v587: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 571 B/s wr, 2 op/s 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: audit 2026-03-09T17:35:14.776716+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: audit 2026-03-09T17:35:14.776716+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: audit 2026-03-09T17:35:14.778040+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:15 vm00 bash[28333]: audit 2026-03-09T17:35:14.778040+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: cluster 2026-03-09T17:35:14.759667+0000 mon.a (mon.0) 2570 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: cluster 2026-03-09T17:35:14.759667+0000 mon.a (mon.0) 2570 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: cluster 2026-03-09T17:35:14.770284+0000 mgr.y (mgr.14505) 367 : cluster [DBG] pgmap v587: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 571 B/s wr, 2 op/s 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: cluster 2026-03-09T17:35:14.770284+0000 mgr.y (mgr.14505) 367 : cluster [DBG] pgmap v587: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 571 B/s wr, 2 op/s 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: audit 2026-03-09T17:35:14.776716+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: audit 2026-03-09T17:35:14.776716+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: audit 2026-03-09T17:35:14.778040+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:15 vm00 bash[20770]: audit 2026-03-09T17:35:14.778040+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: cluster 2026-03-09T17:35:14.759667+0000 mon.a (mon.0) 2570 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: cluster 2026-03-09T17:35:14.759667+0000 mon.a (mon.0) 2570 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: cluster 2026-03-09T17:35:14.770284+0000 mgr.y (mgr.14505) 367 : cluster [DBG] pgmap v587: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 571 B/s wr, 2 op/s 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: cluster 2026-03-09T17:35:14.770284+0000 mgr.y (mgr.14505) 367 : cluster [DBG] pgmap v587: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 571 B/s wr, 2 op/s 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: audit 2026-03-09T17:35:14.776716+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: audit 2026-03-09T17:35:14.776716+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: audit 2026-03-09T17:35:14.778040+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:15 vm02 bash[23351]: audit 2026-03-09T17:35:14.778040+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:35:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:35:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: cluster 2026-03-09T17:35:15.755534+0000 mon.a (mon.0) 2572 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: cluster 2026-03-09T17:35:15.755534+0000 mon.a (mon.0) 2572 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.782027+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.782027+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.795215+0000 mon.b (mon.1) 421 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.795215+0000 mon.b (mon.1) 421 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: cluster 2026-03-09T17:35:15.797248+0000 mon.a (mon.0) 2574 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: cluster 2026-03-09T17:35:15.797248+0000 mon.a (mon.0) 2574 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.855565+0000 mon.b (mon.1) 422 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.855565+0000 mon.b (mon.1) 422 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.856558+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.856558+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.856807+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.856807+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.857588+0000 mon.a (mon.0) 2576 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:17 vm00 bash[20770]: audit 2026-03-09T17:35:15.857588+0000 mon.a (mon.0) 2576 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: cluster 2026-03-09T17:35:15.755534+0000 mon.a (mon.0) 2572 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: cluster 2026-03-09T17:35:15.755534+0000 mon.a (mon.0) 2572 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.782027+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.782027+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.795215+0000 mon.b (mon.1) 421 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.795215+0000 mon.b (mon.1) 421 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: cluster 2026-03-09T17:35:15.797248+0000 mon.a (mon.0) 2574 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: cluster 2026-03-09T17:35:15.797248+0000 mon.a (mon.0) 2574 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.855565+0000 mon.b (mon.1) 422 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.855565+0000 mon.b (mon.1) 422 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.856558+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.856558+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.856807+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.856807+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.857588+0000 mon.a (mon.0) 2576 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:17 vm00 bash[28333]: audit 2026-03-09T17:35:15.857588+0000 mon.a (mon.0) 2576 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: cluster 2026-03-09T17:35:15.755534+0000 mon.a (mon.0) 2572 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: cluster 2026-03-09T17:35:15.755534+0000 mon.a (mon.0) 2572 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.782027+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.782027+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.795215+0000 mon.b (mon.1) 421 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.795215+0000 mon.b (mon.1) 421 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: cluster 2026-03-09T17:35:15.797248+0000 mon.a (mon.0) 2574 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: cluster 2026-03-09T17:35:15.797248+0000 mon.a (mon.0) 2574 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.855565+0000 mon.b (mon.1) 422 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.855565+0000 mon.b (mon.1) 422 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.856558+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.856558+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.856807+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.856807+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.857588+0000 mon.a (mon.0) 2576 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:17 vm02 bash[23351]: audit 2026-03-09T17:35:15.857588+0000 mon.a (mon.0) 2576 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-69"}]: dispatch 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:18 vm00 bash[28333]: cluster 2026-03-09T17:35:16.770645+0000 mgr.y (mgr.14505) 368 : cluster [DBG] pgmap v589: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:18 vm00 bash[28333]: cluster 2026-03-09T17:35:16.770645+0000 mgr.y (mgr.14505) 368 : cluster [DBG] pgmap v589: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:18 vm00 bash[28333]: cluster 2026-03-09T17:35:17.583374+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:18 vm00 bash[28333]: cluster 2026-03-09T17:35:17.583374+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:18 vm00 bash[20770]: cluster 2026-03-09T17:35:16.770645+0000 mgr.y (mgr.14505) 368 : cluster [DBG] pgmap v589: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:18 vm00 bash[20770]: cluster 2026-03-09T17:35:16.770645+0000 mgr.y (mgr.14505) 368 : cluster [DBG] pgmap v589: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:18 vm00 bash[20770]: cluster 2026-03-09T17:35:17.583374+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T17:35:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:18 vm00 bash[20770]: cluster 2026-03-09T17:35:17.583374+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T17:35:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:18 vm02 bash[23351]: cluster 2026-03-09T17:35:16.770645+0000 mgr.y (mgr.14505) 368 : cluster [DBG] pgmap v589: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s 2026-03-09T17:35:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:18 vm02 bash[23351]: cluster 2026-03-09T17:35:16.770645+0000 mgr.y (mgr.14505) 368 : cluster [DBG] pgmap v589: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 817 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s 2026-03-09T17:35:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:18 vm02 bash[23351]: cluster 2026-03-09T17:35:17.583374+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T17:35:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:18 vm02 bash[23351]: cluster 2026-03-09T17:35:17.583374+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: cluster 2026-03-09T17:35:18.507273+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 
bash[28333]: cluster 2026-03-09T17:35:18.507273+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: audit 2026-03-09T17:35:18.512859+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: audit 2026-03-09T17:35:18.512859+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: audit 2026-03-09T17:35:18.516158+0000 mon.a (mon.0) 2579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: audit 2026-03-09T17:35:18.516158+0000 mon.a (mon.0) 2579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: cluster 2026-03-09T17:35:18.770984+0000 mgr.y (mgr.14505) 369 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 765 B/s wr, 2 op/s 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:19 vm00 bash[28333]: cluster 2026-03-09T17:35:18.770984+0000 mgr.y (mgr.14505) 369 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 765 B/s wr, 2 op/s 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: cluster 2026-03-09T17:35:18.507273+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: cluster 2026-03-09T17:35:18.507273+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: audit 2026-03-09T17:35:18.512859+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: audit 2026-03-09T17:35:18.512859+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: audit 2026-03-09T17:35:18.516158+0000 mon.a (mon.0) 2579 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: audit 2026-03-09T17:35:18.516158+0000 mon.a (mon.0) 2579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: cluster 2026-03-09T17:35:18.770984+0000 mgr.y (mgr.14505) 369 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 765 B/s wr, 2 op/s 2026-03-09T17:35:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:19 vm00 bash[20770]: cluster 2026-03-09T17:35:18.770984+0000 mgr.y (mgr.14505) 369 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 765 B/s wr, 2 op/s 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: cluster 2026-03-09T17:35:18.507273+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: cluster 2026-03-09T17:35:18.507273+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: audit 2026-03-09T17:35:18.512859+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: audit 2026-03-09T17:35:18.512859+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: audit 2026-03-09T17:35:18.516158+0000 mon.a (mon.0) 2579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: audit 2026-03-09T17:35:18.516158+0000 mon.a (mon.0) 2579 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: cluster 2026-03-09T17:35:18.770984+0000 mgr.y (mgr.14505) 369 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 765 B/s wr, 2 op/s 2026-03-09T17:35:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:19 vm02 bash[23351]: cluster 2026-03-09T17:35:18.770984+0000 mgr.y (mgr.14505) 369 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 765 B/s wr, 2 op/s 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.511240+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.511240+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: cluster 2026-03-09T17:35:19.524885+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: cluster 2026-03-09T17:35:19.524885+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.528699+0000 mon.b (mon.1) 425 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.528699+0000 mon.b (mon.1) 425 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.539804+0000 mon.b (mon.1) 426 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.539804+0000 mon.b (mon.1) 426 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.544116+0000 mon.a (mon.0) 2582 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:20 vm00 bash[28333]: audit 2026-03-09T17:35:19.544116+0000 mon.a (mon.0) 2582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.511240+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.511240+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: cluster 2026-03-09T17:35:19.524885+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: cluster 2026-03-09T17:35:19.524885+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.528699+0000 mon.b (mon.1) 425 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.528699+0000 mon.b (mon.1) 425 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.539804+0000 mon.b (mon.1) 426 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.539804+0000 mon.b (mon.1) 426 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.544116+0000 mon.a (mon.0) 2582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:20 vm00 bash[20770]: audit 2026-03-09T17:35:19.544116+0000 mon.a (mon.0) 2582 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.511240+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.511240+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: cluster 2026-03-09T17:35:19.524885+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: cluster 2026-03-09T17:35:19.524885+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.528699+0000 mon.b (mon.1) 425 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.528699+0000 mon.b (mon.1) 425 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.539804+0000 mon.b (mon.1) 426 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.539804+0000 mon.b (mon.1) 426 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.544116+0000 mon.a (mon.0) 2582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:20 vm02 bash[23351]: audit 2026-03-09T17:35:19.544116+0000 mon.a (mon.0) 2582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:21 vm00 bash[28333]: audit 2026-03-09T17:35:20.521139+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:21 vm00 bash[28333]: audit 2026-03-09T17:35:20.521139+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:21 vm00 bash[28333]: cluster 2026-03-09T17:35:20.523767+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:21 vm00 bash[28333]: cluster 2026-03-09T17:35:20.523767+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:21 vm00 bash[28333]: cluster 2026-03-09T17:35:20.771318+0000 mgr.y (mgr.14505) 370 : cluster [DBG] pgmap v595: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:21 vm00 bash[28333]: cluster 2026-03-09T17:35:20.771318+0000 mgr.y (mgr.14505) 370 : cluster [DBG] pgmap v595: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:21 vm00 bash[20770]: audit 2026-03-09T17:35:20.521139+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:21 vm00 bash[20770]: audit 2026-03-09T17:35:20.521139+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:21 vm00 bash[20770]: cluster 2026-03-09T17:35:20.523767+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:21 vm00 bash[20770]: cluster 2026-03-09T17:35:20.523767+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:21 vm00 bash[20770]: cluster 2026-03-09T17:35:20.771318+0000 mgr.y (mgr.14505) 370 : cluster [DBG] pgmap v595: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:35:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:21 vm00 bash[20770]: cluster 2026-03-09T17:35:20.771318+0000 mgr.y (mgr.14505) 370 : cluster [DBG] pgmap v595: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:35:21.851 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:21 vm02 bash[23351]: audit 2026-03-09T17:35:20.521139+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:35:21.851 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:21 vm02 bash[23351]: audit 2026-03-09T17:35:20.521139+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:35:21.852 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:21 vm02 bash[23351]: cluster 2026-03-09T17:35:20.523767+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T17:35:21.852 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:21 vm02 bash[23351]: cluster 2026-03-09T17:35:20.523767+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T17:35:21.852 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:21 vm02 bash[23351]: cluster 2026-03-09T17:35:20.771318+0000 mgr.y (mgr.14505) 370 : cluster [DBG] pgmap v595: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:35:21.852 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:21 vm02 bash[23351]: cluster 2026-03-09T17:35:20.771318+0000 mgr.y (mgr.14505) 370 : cluster [DBG] pgmap v595: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:35:22.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:35:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:22 vm00 bash[28333]: cluster 2026-03-09T17:35:21.521161+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:22 vm00 bash[28333]: cluster 2026-03-09T17:35:21.521161+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:22 vm00 bash[28333]: cluster 2026-03-09T17:35:21.541387+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:22 vm00 bash[28333]: cluster 2026-03-09T17:35:21.541387+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:22 vm00 bash[28333]: audit 2026-03-09T17:35:21.851458+0000 mgr.y (mgr.14505) 371 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:22 vm00 bash[28333]: audit 2026-03-09T17:35:21.851458+0000 mgr.y (mgr.14505) 371 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:22 vm00 bash[20770]: cluster 2026-03-09T17:35:21.521161+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:22 vm00 bash[20770]: cluster 
2026-03-09T17:35:21.521161+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:22 vm00 bash[20770]: cluster 2026-03-09T17:35:21.541387+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:22 vm00 bash[20770]: cluster 2026-03-09T17:35:21.541387+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:22 vm00 bash[20770]: audit 2026-03-09T17:35:21.851458+0000 mgr.y (mgr.14505) 371 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:22 vm00 bash[20770]: audit 2026-03-09T17:35:21.851458+0000 mgr.y (mgr.14505) 371 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:22 vm02 bash[23351]: cluster 2026-03-09T17:35:21.521161+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:22 vm02 bash[23351]: cluster 2026-03-09T17:35:21.521161+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:35:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:22 vm02 bash[23351]: cluster 2026-03-09T17:35:21.541387+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T17:35:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:22 vm02 bash[23351]: cluster 2026-03-09T17:35:21.541387+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T17:35:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:22 vm02 bash[23351]: audit 2026-03-09T17:35:21.851458+0000 mgr.y (mgr.14505) 371 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:22 vm02 bash[23351]: audit 2026-03-09T17:35:21.851458+0000 mgr.y (mgr.14505) 371 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:23 vm02 bash[23351]: cluster 2026-03-09T17:35:22.772177+0000 mgr.y (mgr.14505) 372 : cluster [DBG] pgmap v597: 292 pgs: 9 unknown, 283 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 240 B/s rd, 2.8 KiB/s wr, 5 op/s 2026-03-09T17:35:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:23 vm02 bash[23351]: cluster 2026-03-09T17:35:22.772177+0000 mgr.y (mgr.14505) 372 : cluster [DBG] pgmap v597: 292 pgs: 9 unknown, 283 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 240 B/s rd, 2.8 KiB/s wr, 5 op/s 2026-03-09T17:35:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:23 vm00 bash[28333]: cluster 2026-03-09T17:35:22.772177+0000 mgr.y (mgr.14505) 372 : cluster [DBG] pgmap v597: 292 pgs: 9 unknown, 283 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 240 B/s rd, 
2.8 KiB/s wr, 5 op/s 2026-03-09T17:35:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:23 vm00 bash[28333]: cluster 2026-03-09T17:35:22.772177+0000 mgr.y (mgr.14505) 372 : cluster [DBG] pgmap v597: 292 pgs: 9 unknown, 283 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 240 B/s rd, 2.8 KiB/s wr, 5 op/s 2026-03-09T17:35:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:23 vm00 bash[20770]: cluster 2026-03-09T17:35:22.772177+0000 mgr.y (mgr.14505) 372 : cluster [DBG] pgmap v597: 292 pgs: 9 unknown, 283 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 240 B/s rd, 2.8 KiB/s wr, 5 op/s 2026-03-09T17:35:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:23 vm00 bash[20770]: cluster 2026-03-09T17:35:22.772177+0000 mgr.y (mgr.14505) 372 : cluster [DBG] pgmap v597: 292 pgs: 9 unknown, 283 active+clean; 8.3 MiB data, 818 MiB used, 159 GiB / 160 GiB avail; 240 B/s rd, 2.8 KiB/s wr, 5 op/s 2026-03-09T17:35:26.300 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:26 vm00 bash[28333]: cluster 2026-03-09T17:35:24.772708+0000 mgr.y (mgr.14505) 373 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.2 KiB/s wr, 4 op/s 2026-03-09T17:35:26.300 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:26 vm00 bash[28333]: cluster 2026-03-09T17:35:24.772708+0000 mgr.y (mgr.14505) 373 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.2 KiB/s wr, 4 op/s 2026-03-09T17:35:26.300 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:26 vm00 bash[20770]: cluster 2026-03-09T17:35:24.772708+0000 mgr.y (mgr.14505) 373 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.2 KiB/s wr, 4 op/s 2026-03-09T17:35:26.300 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:26 vm00 bash[20770]: cluster 2026-03-09T17:35:24.772708+0000 mgr.y (mgr.14505) 373 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.2 KiB/s wr, 4 op/s 2026-03-09T17:35:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:26 vm02 bash[23351]: cluster 2026-03-09T17:35:24.772708+0000 mgr.y (mgr.14505) 373 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.2 KiB/s wr, 4 op/s 2026-03-09T17:35:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:26 vm02 bash[23351]: cluster 2026-03-09T17:35:24.772708+0000 mgr.y (mgr.14505) 373 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.2 KiB/s wr, 4 op/s 2026-03-09T17:35:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:35:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:35:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:35:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:28 vm02 bash[23351]: cluster 2026-03-09T17:35:26.773034+0000 mgr.y (mgr.14505) 374 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 846 B/s rd, 1.8 KiB/s wr, 4 op/s 2026-03-09T17:35:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:28 vm02 bash[23351]: cluster 2026-03-09T17:35:26.773034+0000 mgr.y (mgr.14505) 374 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 846 B/s rd, 1.8 KiB/s 
wr, 4 op/s 2026-03-09T17:35:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:28 vm02 bash[23351]: audit 2026-03-09T17:35:27.796264+0000 mon.c (mon.2) 598 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:28 vm02 bash[23351]: audit 2026-03-09T17:35:27.796264+0000 mon.c (mon.2) 598 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:28 vm00 bash[28333]: cluster 2026-03-09T17:35:26.773034+0000 mgr.y (mgr.14505) 374 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 846 B/s rd, 1.8 KiB/s wr, 4 op/s 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:28 vm00 bash[28333]: cluster 2026-03-09T17:35:26.773034+0000 mgr.y (mgr.14505) 374 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 846 B/s rd, 1.8 KiB/s wr, 4 op/s 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:28 vm00 bash[28333]: audit 2026-03-09T17:35:27.796264+0000 mon.c (mon.2) 598 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:28 vm00 bash[28333]: audit 2026-03-09T17:35:27.796264+0000 mon.c (mon.2) 598 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:28 vm00 bash[20770]: cluster 2026-03-09T17:35:26.773034+0000 mgr.y (mgr.14505) 374 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 846 B/s rd, 1.8 KiB/s wr, 4 op/s 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:28 vm00 bash[20770]: cluster 2026-03-09T17:35:26.773034+0000 mgr.y (mgr.14505) 374 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 846 B/s rd, 1.8 KiB/s wr, 4 op/s 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:28 vm00 bash[20770]: audit 2026-03-09T17:35:27.796264+0000 mon.c (mon.2) 598 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:28 vm00 bash[20770]: audit 2026-03-09T17:35:27.796264+0000 mon.c (mon.2) 598 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:30 vm02 bash[23351]: cluster 2026-03-09T17:35:28.773730+0000 mgr.y (mgr.14505) 375 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-09T17:35:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:30 vm02 bash[23351]: cluster 2026-03-09T17:35:28.773730+0000 mgr.y (mgr.14505) 375 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 
1.6 KiB/s wr, 4 op/s 2026-03-09T17:35:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:30 vm00 bash[28333]: cluster 2026-03-09T17:35:28.773730+0000 mgr.y (mgr.14505) 375 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-09T17:35:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:30 vm00 bash[28333]: cluster 2026-03-09T17:35:28.773730+0000 mgr.y (mgr.14505) 375 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-09T17:35:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:30 vm00 bash[20770]: cluster 2026-03-09T17:35:28.773730+0000 mgr.y (mgr.14505) 375 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-09T17:35:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:30 vm00 bash[20770]: cluster 2026-03-09T17:35:28.773730+0000 mgr.y (mgr.14505) 375 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-09T17:35:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:32 vm02 bash[23351]: cluster 2026-03-09T17:35:30.774086+0000 mgr.y (mgr.14505) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:35:32.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:32 vm02 bash[23351]: cluster 2026-03-09T17:35:30.774086+0000 mgr.y (mgr.14505) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:35:32.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:35:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:35:32.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:32 vm00 bash[28333]: cluster 2026-03-09T17:35:30.774086+0000 mgr.y (mgr.14505) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:35:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:32 vm00 bash[28333]: cluster 2026-03-09T17:35:30.774086+0000 mgr.y (mgr.14505) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:35:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:32 vm00 bash[20770]: cluster 2026-03-09T17:35:30.774086+0000 mgr.y (mgr.14505) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:35:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:32 vm00 bash[20770]: cluster 2026-03-09T17:35:30.774086+0000 mgr.y (mgr.14505) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:35:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:33 vm02 bash[23351]: audit 2026-03-09T17:35:31.862176+0000 mgr.y (mgr.14505) 377 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:35:33 vm02 bash[23351]: audit 2026-03-09T17:35:31.862176+0000 mgr.y (mgr.14505) 377 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:33.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:33 vm00 bash[28333]: audit 2026-03-09T17:35:31.862176+0000 mgr.y (mgr.14505) 377 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:33 vm00 bash[28333]: audit 2026-03-09T17:35:31.862176+0000 mgr.y (mgr.14505) 377 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:33 vm00 bash[20770]: audit 2026-03-09T17:35:31.862176+0000 mgr.y (mgr.14505) 377 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:33 vm00 bash[20770]: audit 2026-03-09T17:35:31.862176+0000 mgr.y (mgr.14505) 377 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:34 vm02 bash[23351]: cluster 2026-03-09T17:35:32.774652+0000 mgr.y (mgr.14505) 378 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.4 KiB/s wr, 5 op/s 2026-03-09T17:35:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:34 vm02 bash[23351]: cluster 2026-03-09T17:35:32.774652+0000 mgr.y (mgr.14505) 378 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.4 KiB/s wr, 5 op/s 2026-03-09T17:35:34.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:34 vm00 bash[28333]: cluster 2026-03-09T17:35:32.774652+0000 mgr.y (mgr.14505) 378 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.4 KiB/s wr, 5 op/s 2026-03-09T17:35:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:34 vm00 bash[28333]: cluster 2026-03-09T17:35:32.774652+0000 mgr.y (mgr.14505) 378 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.4 KiB/s wr, 5 op/s 2026-03-09T17:35:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:34 vm00 bash[20770]: cluster 2026-03-09T17:35:32.774652+0000 mgr.y (mgr.14505) 378 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.4 KiB/s wr, 5 op/s 2026-03-09T17:35:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:34 vm00 bash[20770]: cluster 2026-03-09T17:35:32.774652+0000 mgr.y (mgr.14505) 378 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.4 KiB/s wr, 5 op/s 2026-03-09T17:35:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:36 vm02 bash[23351]: cluster 2026-03-09T17:35:34.775194+0000 mgr.y (mgr.14505) 379 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T17:35:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:36 
vm02 bash[23351]: cluster 2026-03-09T17:35:34.775194+0000 mgr.y (mgr.14505) 379 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T17:35:36.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:36 vm00 bash[28333]: cluster 2026-03-09T17:35:34.775194+0000 mgr.y (mgr.14505) 379 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T17:35:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:36 vm00 bash[28333]: cluster 2026-03-09T17:35:34.775194+0000 mgr.y (mgr.14505) 379 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T17:35:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:36 vm00 bash[20770]: cluster 2026-03-09T17:35:34.775194+0000 mgr.y (mgr.14505) 379 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T17:35:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:36 vm00 bash[20770]: cluster 2026-03-09T17:35:34.775194+0000 mgr.y (mgr.14505) 379 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T17:35:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:35:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:35:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:35:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:38 vm02 bash[23351]: cluster 2026-03-09T17:35:36.775509+0000 mgr.y (mgr.14505) 380 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:38 vm02 bash[23351]: cluster 2026-03-09T17:35:36.775509+0000 mgr.y (mgr.14505) 380 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:38.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:38 vm00 bash[28333]: cluster 2026-03-09T17:35:36.775509+0000 mgr.y (mgr.14505) 380 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:38 vm00 bash[28333]: cluster 2026-03-09T17:35:36.775509+0000 mgr.y (mgr.14505) 380 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:38 vm00 bash[20770]: cluster 2026-03-09T17:35:36.775509+0000 mgr.y (mgr.14505) 380 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:38 vm00 bash[20770]: cluster 2026-03-09T17:35:36.775509+0000 mgr.y (mgr.14505) 380 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:40 vm02 bash[23351]: cluster 
2026-03-09T17:35:38.776235+0000 mgr.y (mgr.14505) 381 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:40 vm02 bash[23351]: cluster 2026-03-09T17:35:38.776235+0000 mgr.y (mgr.14505) 381 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:40.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:40 vm00 bash[28333]: cluster 2026-03-09T17:35:38.776235+0000 mgr.y (mgr.14505) 381 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:40.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:40 vm00 bash[28333]: cluster 2026-03-09T17:35:38.776235+0000 mgr.y (mgr.14505) 381 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:40 vm00 bash[20770]: cluster 2026-03-09T17:35:38.776235+0000 mgr.y (mgr.14505) 381 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:40 vm00 bash[20770]: cluster 2026-03-09T17:35:38.776235+0000 mgr.y (mgr.14505) 381 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: cluster 2026-03-09T17:35:40.776569+0000 mgr.y (mgr.14505) 382 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: cluster 2026-03-09T17:35:40.776569+0000 mgr.y (mgr.14505) 382 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.585241+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.585241+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.585912+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.585912+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.586359+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.586359+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.586881+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:42 vm02 bash[23351]: audit 2026-03-09T17:35:41.586881+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:35:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: cluster 2026-03-09T17:35:40.776569+0000 mgr.y (mgr.14505) 382 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: cluster 2026-03-09T17:35:40.776569+0000 mgr.y (mgr.14505) 382 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.585241+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.585241+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.585912+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.585912+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.586359+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.586359+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.586881+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:42 vm00 bash[28333]: audit 2026-03-09T17:35:41.586881+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: cluster 2026-03-09T17:35:40.776569+0000 mgr.y (mgr.14505) 382 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: cluster 2026-03-09T17:35:40.776569+0000 mgr.y (mgr.14505) 382 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.585241+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.585241+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.585912+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.585912+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.586359+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.586359+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.586881+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:42 vm00 bash[20770]: audit 2026-03-09T17:35:41.586881+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-71"}]: dispatch 2026-03-09T17:35:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:43 vm02 bash[23351]: audit 2026-03-09T17:35:41.868024+0000 mgr.y (mgr.14505) 383 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:43 vm02 bash[23351]: audit 2026-03-09T17:35:41.868024+0000 mgr.y (mgr.14505) 383 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:43 vm02 bash[23351]: cluster 2026-03-09T17:35:42.102812+0000 mon.a (mon.0) 2589 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T17:35:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:43 vm02 bash[23351]: cluster 2026-03-09T17:35:42.102812+0000 mon.a (mon.0) 2589 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T17:35:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:43 vm02 bash[23351]: audit 2026-03-09T17:35:42.802169+0000 mon.c (mon.2) 599 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:43 vm02 bash[23351]: audit 2026-03-09T17:35:42.802169+0000 mon.c (mon.2) 599 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:43.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:43 vm00 bash[28333]: audit 2026-03-09T17:35:41.868024+0000 mgr.y (mgr.14505) 383 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:43 vm00 bash[28333]: audit 2026-03-09T17:35:41.868024+0000 mgr.y (mgr.14505) 383 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:43 vm00 bash[28333]: cluster 2026-03-09T17:35:42.102812+0000 mon.a (mon.0) 2589 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:43 vm00 bash[28333]: cluster 2026-03-09T17:35:42.102812+0000 mon.a (mon.0) 2589 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:43 vm00 bash[28333]: audit 2026-03-09T17:35:42.802169+0000 mon.c (mon.2) 599 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:43 vm00 bash[28333]: audit 2026-03-09T17:35:42.802169+0000 mon.c (mon.2) 599 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:43 vm00 bash[20770]: audit 2026-03-09T17:35:41.868024+0000 mgr.y (mgr.14505) 383 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:43 vm00 bash[20770]: audit 2026-03-09T17:35:41.868024+0000 mgr.y (mgr.14505) 383 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:43 vm00 bash[20770]: cluster 2026-03-09T17:35:42.102812+0000 mon.a (mon.0) 2589 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:43 vm00 bash[20770]: cluster 2026-03-09T17:35:42.102812+0000 mon.a (mon.0) 2589 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:43 vm00 bash[20770]: audit 2026-03-09T17:35:42.802169+0000 mon.c (mon.2) 599 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:43 vm00 bash[20770]: audit 2026-03-09T17:35:42.802169+0000 mon.c (mon.2) 599 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:44.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: cluster 2026-03-09T17:35:42.777067+0000 mgr.y (mgr.14505) 384 : cluster [DBG] pgmap v608: 260 pgs: 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: cluster 2026-03-09T17:35:42.777067+0000 mgr.y (mgr.14505) 384 : cluster [DBG] pgmap v608: 260 pgs: 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: cluster 2026-03-09T17:35:43.133198+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: cluster 2026-03-09T17:35:43.133198+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 
2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: audit 2026-03-09T17:35:43.135591+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: audit 2026-03-09T17:35:43.135591+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: audit 2026-03-09T17:35:43.143603+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:44 vm00 bash[28333]: audit 2026-03-09T17:35:43.143603+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: cluster 2026-03-09T17:35:42.777067+0000 mgr.y (mgr.14505) 384 : cluster [DBG] pgmap v608: 260 pgs: 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: cluster 2026-03-09T17:35:42.777067+0000 mgr.y (mgr.14505) 384 : cluster [DBG] pgmap v608: 260 pgs: 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: cluster 2026-03-09T17:35:43.133198+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: cluster 2026-03-09T17:35:43.133198+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: audit 2026-03-09T17:35:43.135591+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: audit 2026-03-09T17:35:43.135591+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: audit 2026-03-09T17:35:43.143603+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:44 vm00 bash[20770]: audit 2026-03-09T17:35:43.143603+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: cluster 2026-03-09T17:35:42.777067+0000 mgr.y (mgr.14505) 384 : cluster [DBG] pgmap v608: 260 pgs: 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: cluster 2026-03-09T17:35:42.777067+0000 mgr.y (mgr.14505) 384 : cluster [DBG] pgmap v608: 260 pgs: 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: cluster 2026-03-09T17:35:43.133198+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: cluster 2026-03-09T17:35:43.133198+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: audit 2026-03-09T17:35:43.135591+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: audit 2026-03-09T17:35:43.135591+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: audit 2026-03-09T17:35:43.143603+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:44 vm02 bash[23351]: audit 2026-03-09T17:35:43.143603+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:45.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:45 vm00 bash[28333]: audit 2026-03-09T17:35:44.123191+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:45 vm00 bash[28333]: audit 2026-03-09T17:35:44.123191+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:45 vm00 bash[28333]: audit 2026-03-09T17:35:44.130399+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:45 vm00 bash[28333]: audit 2026-03-09T17:35:44.130399+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:45 vm00 bash[28333]: cluster 2026-03-09T17:35:44.130514+0000 mon.a (mon.0) 2593 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:45 vm00 bash[28333]: cluster 2026-03-09T17:35:44.130514+0000 mon.a (mon.0) 2593 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:45 vm00 bash[20770]: audit 2026-03-09T17:35:44.123191+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:45 vm00 bash[20770]: audit 2026-03-09T17:35:44.123191+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:45 vm00 bash[20770]: audit 2026-03-09T17:35:44.130399+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:45 vm00 bash[20770]: audit 2026-03-09T17:35:44.130399+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:45 vm00 bash[20770]: cluster 2026-03-09T17:35:44.130514+0000 mon.a (mon.0) 2593 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T17:35:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:45 vm00 bash[20770]: cluster 2026-03-09T17:35:44.130514+0000 mon.a (mon.0) 2593 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T17:35:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:45 vm02 bash[23351]: audit 2026-03-09T17:35:44.123191+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:45 vm02 bash[23351]: audit 2026-03-09T17:35:44.123191+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:45 vm02 bash[23351]: audit 2026-03-09T17:35:44.130399+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:45 vm02 bash[23351]: audit 2026-03-09T17:35:44.130399+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:45 vm02 bash[23351]: cluster 2026-03-09T17:35:44.130514+0000 mon.a (mon.0) 2593 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T17:35:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:45 vm02 bash[23351]: cluster 2026-03-09T17:35:44.130514+0000 mon.a (mon.0) 2593 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T17:35:46.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:46 vm00 bash[28333]: cluster 2026-03-09T17:35:44.777342+0000 mgr.y (mgr.14505) 385 : cluster [DBG] pgmap v611: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:46 vm00 bash[28333]: cluster 2026-03-09T17:35:44.777342+0000 mgr.y (mgr.14505) 385 : cluster [DBG] pgmap v611: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:46 vm00 bash[28333]: cluster 2026-03-09T17:35:45.162214+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:46 vm00 bash[28333]: cluster 2026-03-09T17:35:45.162214+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:46 vm00 bash[20770]: cluster 2026-03-09T17:35:44.777342+0000 mgr.y (mgr.14505) 385 : cluster [DBG] pgmap v611: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:46 vm00 bash[20770]: cluster 2026-03-09T17:35:44.777342+0000 mgr.y (mgr.14505) 385 : cluster [DBG] pgmap v611: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:46 vm00 bash[20770]: cluster 2026-03-09T17:35:45.162214+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:46 vm00 bash[20770]: cluster 2026-03-09T17:35:45.162214+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T17:35:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:35:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:35:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:35:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:46 vm02 bash[23351]: cluster 2026-03-09T17:35:44.777342+0000 mgr.y (mgr.14505) 385 : cluster [DBG] pgmap v611: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 
160 GiB avail 2026-03-09T17:35:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:46 vm02 bash[23351]: cluster 2026-03-09T17:35:44.777342+0000 mgr.y (mgr.14505) 385 : cluster [DBG] pgmap v611: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:46 vm02 bash[23351]: cluster 2026-03-09T17:35:45.162214+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T17:35:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:46 vm02 bash[23351]: cluster 2026-03-09T17:35:45.162214+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T17:35:47.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:47 vm00 bash[28333]: cluster 2026-03-09T17:35:46.166086+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T17:35:47.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:47 vm00 bash[28333]: cluster 2026-03-09T17:35:46.166086+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T17:35:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:47 vm00 bash[20770]: cluster 2026-03-09T17:35:46.166086+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T17:35:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:47 vm00 bash[20770]: cluster 2026-03-09T17:35:46.166086+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T17:35:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:47 vm02 bash[23351]: cluster 2026-03-09T17:35:46.166086+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T17:35:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:47 vm02 bash[23351]: cluster 2026-03-09T17:35:46.166086+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T17:35:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:48 vm00 bash[20770]: cluster 2026-03-09T17:35:46.777688+0000 mgr.y (mgr.14505) 386 : cluster [DBG] pgmap v614: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:48 vm00 bash[20770]: cluster 2026-03-09T17:35:46.777688+0000 mgr.y (mgr.14505) 386 : cluster [DBG] pgmap v614: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:48 vm00 bash[28333]: cluster 2026-03-09T17:35:46.777688+0000 mgr.y (mgr.14505) 386 : cluster [DBG] pgmap v614: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:48 vm00 bash[28333]: cluster 2026-03-09T17:35:46.777688+0000 mgr.y (mgr.14505) 386 : cluster [DBG] pgmap v614: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:48 vm02 bash[23351]: cluster 2026-03-09T17:35:46.777688+0000 mgr.y (mgr.14505) 386 : cluster [DBG] pgmap v614: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:48 vm02 bash[23351]: cluster 2026-03-09T17:35:46.777688+0000 mgr.y (mgr.14505) 386 : cluster [DBG] pgmap v614: 292 pgs: 32 unknown, 260 active+clean; 8.3 
MiB data, 836 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.333560+0000 mon.c (mon.2) 600 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.333560+0000 mon.c (mon.2) 600 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.657063+0000 mon.c (mon.2) 601 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.657063+0000 mon.c (mon.2) 601 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.658065+0000 mon.c (mon.2) 602 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.658065+0000 mon.c (mon.2) 602 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.664170+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:49 vm00 bash[28333]: audit 2026-03-09T17:35:48.664170+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.333560+0000 mon.c (mon.2) 600 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.333560+0000 mon.c (mon.2) 600 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.657063+0000 mon.c (mon.2) 601 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.657063+0000 mon.c (mon.2) 601 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.658065+0000 mon.c (mon.2) 602 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.658065+0000 mon.c (mon.2) 602 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.664170+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:35:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:49 vm00 bash[20770]: audit 2026-03-09T17:35:48.664170+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.333560+0000 mon.c (mon.2) 600 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.333560+0000 mon.c (mon.2) 600 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.657063+0000 mon.c (mon.2) 601 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.657063+0000 mon.c (mon.2) 601 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.658065+0000 mon.c (mon.2) 602 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.658065+0000 mon.c (mon.2) 602 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.664170+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:35:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:49 vm02 bash[23351]: audit 2026-03-09T17:35:48.664170+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:35:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:50 vm00 bash[28333]: cluster 2026-03-09T17:35:48.778428+0000 mgr.y (mgr.14505) 387 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:35:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:50 vm00 bash[28333]: cluster 2026-03-09T17:35:48.778428+0000 mgr.y (mgr.14505) 387 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:35:50.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:50 vm00 bash[20770]: cluster 2026-03-09T17:35:48.778428+0000 mgr.y (mgr.14505) 387 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:35:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:50 vm00 bash[20770]: cluster 2026-03-09T17:35:48.778428+0000 mgr.y (mgr.14505) 387 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:35:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:50 vm02 bash[23351]: cluster 2026-03-09T17:35:48.778428+0000 mgr.y (mgr.14505) 387 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:35:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:50 vm02 bash[23351]: cluster 2026-03-09T17:35:48.778428+0000 mgr.y (mgr.14505) 387 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T17:35:52.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:35:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:35:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:52 vm02 bash[23351]: cluster 2026-03-09T17:35:50.778755+0000 mgr.y (mgr.14505) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:35:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:52 vm02 bash[23351]: cluster 2026-03-09T17:35:50.778755+0000 mgr.y (mgr.14505) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:35:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:52 vm00 bash[28333]: cluster 2026-03-09T17:35:50.778755+0000 mgr.y (mgr.14505) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:35:52.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:52 vm00 bash[28333]: cluster 2026-03-09T17:35:50.778755+0000 mgr.y (mgr.14505) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:35:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:52 vm00 bash[20770]: cluster 2026-03-09T17:35:50.778755+0000 mgr.y (mgr.14505) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:35:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:52 vm00 bash[20770]: cluster 2026-03-09T17:35:50.778755+0000 mgr.y (mgr.14505) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:35:53.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:53 vm02 bash[23351]: audit 2026-03-09T17:35:51.878503+0000 mgr.y (mgr.14505) 389 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:53 vm02 bash[23351]: audit 
2026-03-09T17:35:51.878503+0000 mgr.y (mgr.14505) 389 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:53.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:53 vm00 bash[28333]: audit 2026-03-09T17:35:51.878503+0000 mgr.y (mgr.14505) 389 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:53 vm00 bash[28333]: audit 2026-03-09T17:35:51.878503+0000 mgr.y (mgr.14505) 389 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:53 vm00 bash[20770]: audit 2026-03-09T17:35:51.878503+0000 mgr.y (mgr.14505) 389 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:53 vm00 bash[20770]: audit 2026-03-09T17:35:51.878503+0000 mgr.y (mgr.14505) 389 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:35:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:54 vm02 bash[23351]: cluster 2026-03-09T17:35:52.779366+0000 mgr.y (mgr.14505) 390 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-09T17:35:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:54 vm02 bash[23351]: cluster 2026-03-09T17:35:52.779366+0000 mgr.y (mgr.14505) 390 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-09T17:35:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:54 vm00 bash[28333]: cluster 2026-03-09T17:35:52.779366+0000 mgr.y (mgr.14505) 390 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-09T17:35:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:54 vm00 bash[28333]: cluster 2026-03-09T17:35:52.779366+0000 mgr.y (mgr.14505) 390 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-09T17:35:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:54 vm00 bash[20770]: cluster 2026-03-09T17:35:52.779366+0000 mgr.y (mgr.14505) 390 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-09T17:35:54.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:54 vm00 bash[20770]: cluster 2026-03-09T17:35:52.779366+0000 mgr.y (mgr.14505) 390 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: cluster 2026-03-09T17:35:54.779882+0000 mgr.y (mgr.14505) 391 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 745 B/s wr, 3 op/s 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: cluster 
2026-03-09T17:35:54.779882+0000 mgr.y (mgr.14505) 391 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 745 B/s wr, 3 op/s 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.224378+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.224378+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.225328+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.225328+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.225534+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.225534+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.226333+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:56 vm02 bash[23351]: audit 2026-03-09T17:35:56.226333+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: cluster 2026-03-09T17:35:54.779882+0000 mgr.y (mgr.14505) 391 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 745 B/s wr, 3 op/s 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: cluster 2026-03-09T17:35:54.779882+0000 mgr.y (mgr.14505) 391 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 745 B/s wr, 3 op/s 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.224378+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.224378+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.225328+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.225328+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.225534+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.225534+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.226333+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:56 vm00 bash[28333]: audit 2026-03-09T17:35:56.226333+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: cluster 2026-03-09T17:35:54.779882+0000 mgr.y (mgr.14505) 391 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 745 B/s wr, 3 op/s 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: cluster 2026-03-09T17:35:54.779882+0000 mgr.y (mgr.14505) 391 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 745 B/s wr, 3 op/s 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.224378+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.224378+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.225328+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.225328+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.225534+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.225534+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.226333+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:56 vm00 bash[20770]: audit 2026-03-09T17:35:56.226333+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-73"}]: dispatch 2026-03-09T17:35:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:35:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:35:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:35:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:57 vm02 bash[23351]: cluster 2026-03-09T17:35:56.322458+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T17:35:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:57 vm02 bash[23351]: cluster 2026-03-09T17:35:56.322458+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T17:35:57.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:57 vm00 bash[28333]: cluster 2026-03-09T17:35:56.322458+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T17:35:57.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:57 vm00 bash[28333]: cluster 2026-03-09T17:35:56.322458+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T17:35:57.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:57 vm00 bash[20770]: cluster 2026-03-09T17:35:56.322458+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T17:35:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:57 vm00 bash[20770]: cluster 2026-03-09T17:35:56.322458+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T17:35:58.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: cluster 2026-03-09T17:35:56.780188+0000 mgr.y (mgr.14505) 392 : cluster [DBG] pgmap v620: 260 pgs: 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: cluster 2026-03-09T17:35:56.780188+0000 mgr.y (mgr.14505) 392 : cluster [DBG] pgmap v620: 260 pgs: 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: cluster 2026-03-09T17:35:57.361770+0000 mon.a (mon.0) 2600 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: cluster 2026-03-09T17:35:57.361770+0000 mon.a (mon.0) 2600 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: audit 2026-03-09T17:35:57.369913+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: audit 2026-03-09T17:35:57.369913+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: audit 2026-03-09T17:35:57.371995+0000 mon.a (mon.0) 2601 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: audit 2026-03-09T17:35:57.371995+0000 mon.a (mon.0) 2601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: audit 2026-03-09T17:35:57.808628+0000 mon.c (mon.2) 603 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:58 vm00 bash[28333]: audit 2026-03-09T17:35:57.808628+0000 mon.c (mon.2) 603 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: cluster 2026-03-09T17:35:56.780188+0000 mgr.y (mgr.14505) 392 : cluster [DBG] pgmap v620: 260 pgs: 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: cluster 2026-03-09T17:35:56.780188+0000 mgr.y (mgr.14505) 392 : cluster [DBG] pgmap v620: 260 pgs: 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: cluster 2026-03-09T17:35:57.361770+0000 mon.a (mon.0) 2600 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: cluster 2026-03-09T17:35:57.361770+0000 mon.a (mon.0) 2600 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: audit 2026-03-09T17:35:57.369913+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: audit 2026-03-09T17:35:57.369913+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: audit 2026-03-09T17:35:57.371995+0000 mon.a (mon.0) 2601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: audit 2026-03-09T17:35:57.371995+0000 mon.a (mon.0) 2601 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: audit 2026-03-09T17:35:57.808628+0000 mon.c (mon.2) 603 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:58 vm00 bash[20770]: audit 2026-03-09T17:35:57.808628+0000 mon.c (mon.2) 603 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: cluster 2026-03-09T17:35:56.780188+0000 mgr.y (mgr.14505) 392 : cluster [DBG] pgmap v620: 260 pgs: 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: cluster 2026-03-09T17:35:56.780188+0000 mgr.y (mgr.14505) 392 : cluster [DBG] pgmap v620: 260 pgs: 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: cluster 2026-03-09T17:35:57.361770+0000 mon.a (mon.0) 2600 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: cluster 2026-03-09T17:35:57.361770+0000 mon.a (mon.0) 2600 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: audit 2026-03-09T17:35:57.369913+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: audit 2026-03-09T17:35:57.369913+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: audit 2026-03-09T17:35:57.371995+0000 mon.a (mon.0) 2601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: audit 2026-03-09T17:35:57.371995+0000 mon.a (mon.0) 2601 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: audit 2026-03-09T17:35:57.808628+0000 mon.c (mon.2) 603 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:58 vm02 bash[23351]: audit 2026-03-09T17:35:57.808628+0000 mon.c (mon.2) 603 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: audit 2026-03-09T17:35:58.353329+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: audit 2026-03-09T17:35:58.353329+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: audit 2026-03-09T17:35:58.363961+0000 mon.b (mon.1) 434 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: audit 2026-03-09T17:35:58.363961+0000 mon.b (mon.1) 434 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: cluster 2026-03-09T17:35:58.364786+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: cluster 2026-03-09T17:35:58.364786+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: cluster 2026-03-09T17:35:58.780504+0000 mgr.y (mgr.14505) 393 : cluster [DBG] pgmap v623: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: cluster 2026-03-09T17:35:58.780504+0000 mgr.y (mgr.14505) 393 : cluster [DBG] pgmap v623: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: cluster 2026-03-09T17:35:59.358950+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:35:59 vm00 bash[28333]: cluster 2026-03-09T17:35:59.358950+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: audit 2026-03-09T17:35:58.353329+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: audit 2026-03-09T17:35:58.353329+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: audit 2026-03-09T17:35:58.363961+0000 mon.b (mon.1) 434 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: audit 2026-03-09T17:35:58.363961+0000 mon.b (mon.1) 434 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: cluster 2026-03-09T17:35:58.364786+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: cluster 2026-03-09T17:35:58.364786+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: cluster 2026-03-09T17:35:58.780504+0000 mgr.y (mgr.14505) 393 : cluster [DBG] pgmap v623: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: cluster 2026-03-09T17:35:58.780504+0000 mgr.y (mgr.14505) 393 : cluster [DBG] pgmap v623: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: cluster 2026-03-09T17:35:59.358950+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T17:35:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:35:59 vm00 bash[20770]: cluster 2026-03-09T17:35:59.358950+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: audit 2026-03-09T17:35:58.353329+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: audit 2026-03-09T17:35:58.353329+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: audit 2026-03-09T17:35:58.363961+0000 mon.b (mon.1) 434 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: audit 2026-03-09T17:35:58.363961+0000 mon.b (mon.1) 434 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: cluster 2026-03-09T17:35:58.364786+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: cluster 2026-03-09T17:35:58.364786+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: cluster 2026-03-09T17:35:58.780504+0000 mgr.y (mgr.14505) 393 : cluster [DBG] pgmap v623: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: cluster 2026-03-09T17:35:58.780504+0000 mgr.y (mgr.14505) 393 : cluster [DBG] pgmap v623: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: cluster 2026-03-09T17:35:59.358950+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T17:35:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:35:59 vm02 bash[23351]: cluster 2026-03-09T17:35:59.358950+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: cluster 2026-03-09T17:36:00.368901+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: cluster 2026-03-09T17:36:00.368901+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.414118+0000 mon.b (mon.1) 435 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.414118+0000 mon.b (mon.1) 435 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.415081+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.415081+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.415286+0000 mon.a (mon.0) 2606 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.415286+0000 mon.a (mon.0) 2606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.416069+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:01 vm02 bash[23351]: audit 2026-03-09T17:36:00.416069+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: cluster 2026-03-09T17:36:00.368901+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: cluster 2026-03-09T17:36:00.368901+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.414118+0000 mon.b (mon.1) 435 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.414118+0000 mon.b (mon.1) 435 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.415081+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.415081+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.415286+0000 mon.a (mon.0) 2606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.415286+0000 mon.a (mon.0) 2606 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.416069+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:01 vm00 bash[28333]: audit 2026-03-09T17:36:00.416069+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: cluster 2026-03-09T17:36:00.368901+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: cluster 2026-03-09T17:36:00.368901+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.414118+0000 mon.b (mon.1) 435 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.414118+0000 mon.b (mon.1) 435 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.415081+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.415081+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.415286+0000 mon.a (mon.0) 2606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.415286+0000 mon.a (mon.0) 2606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.416069+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:01 vm00 bash[20770]: audit 2026-03-09T17:36:00.416069+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-75"}]: dispatch 2026-03-09T17:36:02.384 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:36:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:36:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:02 vm02 bash[23351]: cluster 2026-03-09T17:36:00.780801+0000 mgr.y (mgr.14505) 394 : cluster [DBG] pgmap v626: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:02 vm02 bash[23351]: cluster 2026-03-09T17:36:00.780801+0000 mgr.y (mgr.14505) 394 : cluster [DBG] pgmap v626: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:02 vm02 bash[23351]: cluster 2026-03-09T17:36:01.379816+0000 mon.a (mon.0) 2608 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T17:36:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:02 vm02 bash[23351]: cluster 2026-03-09T17:36:01.379816+0000 mon.a (mon.0) 2608 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T17:36:02.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:02 vm00 bash[28333]: cluster 2026-03-09T17:36:00.780801+0000 mgr.y (mgr.14505) 394 : cluster [DBG] pgmap v626: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:02.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:02 vm00 bash[28333]: cluster 2026-03-09T17:36:00.780801+0000 mgr.y (mgr.14505) 394 : cluster [DBG] pgmap v626: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:02.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:02 vm00 bash[28333]: cluster 2026-03-09T17:36:01.379816+0000 mon.a (mon.0) 2608 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T17:36:02.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:02 vm00 bash[28333]: cluster 2026-03-09T17:36:01.379816+0000 mon.a (mon.0) 2608 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T17:36:02.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:02 vm00 bash[20770]: cluster 2026-03-09T17:36:00.780801+0000 mgr.y (mgr.14505) 394 : cluster [DBG] pgmap v626: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:02 vm00 bash[20770]: cluster 2026-03-09T17:36:00.780801+0000 mgr.y (mgr.14505) 394 : cluster [DBG] pgmap v626: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 872 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:02 vm00 bash[20770]: cluster 2026-03-09T17:36:01.379816+0000 mon.a (mon.0) 2608 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T17:36:02.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:02 vm00 bash[20770]: cluster 2026-03-09T17:36:01.379816+0000 mon.a (mon.0) 2608 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:01.886586+0000 mgr.y (mgr.14505) 395 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:01.886586+0000 mgr.y (mgr.14505) 395 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:02.374944+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:02.374944+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: cluster 2026-03-09T17:36:02.375088+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: cluster 2026-03-09T17:36:02.375088+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:02.382442+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:02.382442+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:03.374526+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:03.374526+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:03.381170+0000 mon.b (mon.1) 438 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: audit 2026-03-09T17:36:03.381170+0000 mon.b (mon.1) 438 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: cluster 2026-03-09T17:36:03.382564+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:03 vm00 bash[28333]: cluster 2026-03-09T17:36:03.382564+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:01.886586+0000 mgr.y (mgr.14505) 395 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:01.886586+0000 mgr.y (mgr.14505) 395 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:02.374944+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:02.374944+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: cluster 2026-03-09T17:36:02.375088+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: cluster 2026-03-09T17:36:02.375088+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:02.382442+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:02.382442+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:03.374526+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:03.374526+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:03.381170+0000 mon.b (mon.1) 438 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: audit 2026-03-09T17:36:03.381170+0000 mon.b (mon.1) 438 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: cluster 2026-03-09T17:36:03.382564+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T17:36:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:03 vm00 bash[20770]: cluster 2026-03-09T17:36:03.382564+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:01.886586+0000 mgr.y (mgr.14505) 395 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:01.886586+0000 mgr.y (mgr.14505) 395 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:02.374944+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:02.374944+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: cluster 2026-03-09T17:36:02.375088+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: cluster 2026-03-09T17:36:02.375088+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:02.382442+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:02.382442+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:03.374526+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:03.374526+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:03.381170+0000 mon.b (mon.1) 438 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: audit 2026-03-09T17:36:03.381170+0000 mon.b (mon.1) 438 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: cluster 2026-03-09T17:36:03.382564+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T17:36:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:03 vm02 bash[23351]: cluster 2026-03-09T17:36:03.382564+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T17:36:04.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:04 vm00 bash[28333]: cluster 2026-03-09T17:36:02.781337+0000 mgr.y (mgr.14505) 396 : cluster [DBG] pgmap v629: 292 pgs: 26 creating+peering, 6 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:04 vm00 bash[28333]: cluster 2026-03-09T17:36:02.781337+0000 mgr.y (mgr.14505) 396 : cluster [DBG] pgmap v629: 292 pgs: 26 creating+peering, 6 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:04 vm00 bash[28333]: cluster 2026-03-09T17:36:03.385092+0000 mon.a (mon.0) 2613 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:04 vm00 bash[28333]: cluster 2026-03-09T17:36:03.385092+0000 mon.a (mon.0) 2613 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:04 vm00 bash[28333]: cluster 2026-03-09T17:36:04.380997+0000 mon.a (mon.0) 2614 : cluster [DBG] osdmap e420: 8 
total, 8 up, 8 in 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:04 vm00 bash[28333]: cluster 2026-03-09T17:36:04.380997+0000 mon.a (mon.0) 2614 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:04 vm00 bash[20770]: cluster 2026-03-09T17:36:02.781337+0000 mgr.y (mgr.14505) 396 : cluster [DBG] pgmap v629: 292 pgs: 26 creating+peering, 6 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:04 vm00 bash[20770]: cluster 2026-03-09T17:36:02.781337+0000 mgr.y (mgr.14505) 396 : cluster [DBG] pgmap v629: 292 pgs: 26 creating+peering, 6 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:04 vm00 bash[20770]: cluster 2026-03-09T17:36:03.385092+0000 mon.a (mon.0) 2613 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:04 vm00 bash[20770]: cluster 2026-03-09T17:36:03.385092+0000 mon.a (mon.0) 2613 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:04 vm00 bash[20770]: cluster 2026-03-09T17:36:04.380997+0000 mon.a (mon.0) 2614 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T17:36:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:04 vm00 bash[20770]: cluster 2026-03-09T17:36:04.380997+0000 mon.a (mon.0) 2614 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T17:36:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:04 vm02 bash[23351]: cluster 2026-03-09T17:36:02.781337+0000 mgr.y (mgr.14505) 396 : cluster [DBG] pgmap v629: 292 pgs: 26 creating+peering, 6 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:04 vm02 bash[23351]: cluster 2026-03-09T17:36:02.781337+0000 mgr.y (mgr.14505) 396 : cluster [DBG] pgmap v629: 292 pgs: 26 creating+peering, 6 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:04 vm02 bash[23351]: cluster 2026-03-09T17:36:03.385092+0000 mon.a (mon.0) 2613 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:04 vm02 bash[23351]: cluster 2026-03-09T17:36:03.385092+0000 mon.a (mon.0) 2613 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:04 vm02 bash[23351]: cluster 2026-03-09T17:36:04.380997+0000 mon.a (mon.0) 2614 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T17:36:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:04 vm02 bash[23351]: cluster 2026-03-09T17:36:04.380997+0000 mon.a (mon.0) 2614 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.423887+0000 mon.b (mon.1) 439 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.423887+0000 mon.b (mon.1) 439 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.424590+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.424590+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.424983+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.424983+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.425556+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: audit 2026-03-09T17:36:04.425556+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: cluster 2026-03-09T17:36:04.781667+0000 mgr.y (mgr.14505) 397 : cluster [DBG] pgmap v632: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:05 vm00 bash[28333]: cluster 2026-03-09T17:36:04.781667+0000 mgr.y (mgr.14505) 397 : cluster [DBG] pgmap v632: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.423887+0000 mon.b (mon.1) 439 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.423887+0000 mon.b (mon.1) 439 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.424590+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.424590+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.424983+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.424983+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.425556+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: audit 2026-03-09T17:36:04.425556+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: cluster 2026-03-09T17:36:04.781667+0000 mgr.y (mgr.14505) 397 : cluster [DBG] pgmap v632: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:36:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:05 vm00 bash[20770]: cluster 2026-03-09T17:36:04.781667+0000 mgr.y (mgr.14505) 397 : cluster [DBG] pgmap v632: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.423887+0000 mon.b (mon.1) 439 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.423887+0000 mon.b (mon.1) 439 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.424590+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.424590+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.424983+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.424983+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.425556+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: audit 2026-03-09T17:36:04.425556+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-77"}]: dispatch 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: cluster 2026-03-09T17:36:04.781667+0000 mgr.y (mgr.14505) 397 : cluster [DBG] pgmap v632: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:36:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:05 vm02 bash[23351]: cluster 2026-03-09T17:36:04.781667+0000 mgr.y (mgr.14505) 397 : cluster [DBG] pgmap v632: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:36:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:06 vm00 bash[28333]: cluster 2026-03-09T17:36:05.435991+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T17:36:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:06 vm00 bash[28333]: cluster 2026-03-09T17:36:05.435991+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T17:36:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:06 vm00 bash[20770]: cluster 2026-03-09T17:36:05.435991+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T17:36:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:06 vm00 bash[20770]: cluster 2026-03-09T17:36:05.435991+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T17:36:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:36:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:36:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:36:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:06 vm02 bash[23351]: cluster 2026-03-09T17:36:05.435991+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T17:36:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:06 vm02 bash[23351]: cluster 2026-03-09T17:36:05.435991+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: cluster 2026-03-09T17:36:06.432502+0000 mon.a (mon.0) 2618 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: cluster 2026-03-09T17:36:06.432502+0000 mon.a (mon.0) 2618 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: audit 2026-03-09T17:36:06.438252+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: audit 2026-03-09T17:36:06.438252+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: audit 2026-03-09T17:36:06.440556+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: audit 2026-03-09T17:36:06.440556+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: cluster 2026-03-09T17:36:06.781987+0000 mgr.y (mgr.14505) 398 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:07 vm00 bash[28333]: cluster 2026-03-09T17:36:06.781987+0000 mgr.y (mgr.14505) 398 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: cluster 2026-03-09T17:36:06.432502+0000 mon.a (mon.0) 2618 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: cluster 2026-03-09T17:36:06.432502+0000 mon.a (mon.0) 2618 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: audit 2026-03-09T17:36:06.438252+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: audit 2026-03-09T17:36:06.438252+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: audit 2026-03-09T17:36:06.440556+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: audit 2026-03-09T17:36:06.440556+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: cluster 2026-03-09T17:36:06.781987+0000 mgr.y (mgr.14505) 398 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:36:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:07 vm00 bash[20770]: cluster 2026-03-09T17:36:06.781987+0000 mgr.y (mgr.14505) 398 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: cluster 2026-03-09T17:36:06.432502+0000 mon.a (mon.0) 2618 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: cluster 2026-03-09T17:36:06.432502+0000 mon.a (mon.0) 2618 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: audit 2026-03-09T17:36:06.438252+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: audit 2026-03-09T17:36:06.438252+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: audit 2026-03-09T17:36:06.440556+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: audit 2026-03-09T17:36:06.440556+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: cluster 2026-03-09T17:36:06.781987+0000 mgr.y (mgr.14505) 398 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:36:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:07 vm02 bash[23351]: cluster 2026-03-09T17:36:06.781987+0000 mgr.y (mgr.14505) 398 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 908 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:08 vm00 bash[28333]: audit 2026-03-09T17:36:07.428648+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:08 vm00 bash[28333]: audit 2026-03-09T17:36:07.428648+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:08 vm00 bash[28333]: audit 2026-03-09T17:36:07.451430+0000 mon.b (mon.1) 442 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:08 vm00 bash[28333]: audit 2026-03-09T17:36:07.451430+0000 mon.b (mon.1) 442 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:08 vm00 bash[28333]: cluster 2026-03-09T17:36:07.453925+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:08 vm00 bash[28333]: cluster 2026-03-09T17:36:07.453925+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:08 vm00 bash[20770]: audit 2026-03-09T17:36:07.428648+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:08 vm00 bash[20770]: audit 2026-03-09T17:36:07.428648+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:08 vm00 bash[20770]: audit 2026-03-09T17:36:07.451430+0000 mon.b (mon.1) 442 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:08 vm00 bash[20770]: audit 2026-03-09T17:36:07.451430+0000 mon.b (mon.1) 442 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:08 vm00 bash[20770]: cluster 2026-03-09T17:36:07.453925+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T17:36:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:08 vm00 bash[20770]: cluster 2026-03-09T17:36:07.453925+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T17:36:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:08 vm02 bash[23351]: audit 2026-03-09T17:36:07.428648+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:08 vm02 bash[23351]: audit 2026-03-09T17:36:07.428648+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:08 vm02 bash[23351]: audit 2026-03-09T17:36:07.451430+0000 mon.b (mon.1) 442 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:08 vm02 bash[23351]: audit 2026-03-09T17:36:07.451430+0000 mon.b (mon.1) 442 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:08 vm02 bash[23351]: cluster 2026-03-09T17:36:07.453925+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T17:36:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:08 vm02 bash[23351]: cluster 2026-03-09T17:36:07.453925+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:09 vm00 bash[28333]: cluster 2026-03-09T17:36:08.438014+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:09 vm00 bash[28333]: cluster 2026-03-09T17:36:08.438014+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:09 vm00 bash[28333]: cluster 2026-03-09T17:36:08.782295+0000 mgr.y (mgr.14505) 399 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:09 vm00 bash[28333]: cluster 2026-03-09T17:36:08.782295+0000 mgr.y (mgr.14505) 399 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:09 vm00 bash[20770]: cluster 2026-03-09T17:36:08.438014+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:09 vm00 bash[20770]: cluster 2026-03-09T17:36:08.438014+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:09 vm00 bash[20770]: cluster 2026-03-09T17:36:08.782295+0000 mgr.y (mgr.14505) 399 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:09 vm00 bash[20770]: cluster 2026-03-09T17:36:08.782295+0000 mgr.y (mgr.14505) 399 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 
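The audit entries above record the mon commands the RADOS API test client is dispatching against its test-rados-api-vm00-* pools: cache-tier teardown (osd tier remove-overlay / osd tier remove), tagging the new pool so the POOL_APP_NOT_ENABLED warning clears (osd pool application enable), re-reading the osdmap (osd dump), and the targeted deep scrub of pg 288.6 that follows. As a rough illustration only, the equivalent operations could be issued by hand with the ceph CLI; the pool names are the test-generated ones from this run, and a working client.admin keyring on the host is assumed:

    # detach the cache tier, as in the 'osd tier remove-overlay' / 'osd tier remove' dispatches above
    ceph osd tier remove-overlay test-rados-api-vm00-60118-6
    ceph osd tier remove test-rados-api-vm00-60118-6 test-rados-api-vm00-60118-77
    # tag the freshly created pool so the POOL_APP_NOT_ENABLED health check clears
    ceph osd pool application enable test-rados-api-vm00-60118-79 rados --yes-i-really-mean-it
    # re-read the osdmap and kick off the deep scrub seen on pg 288.6
    ceph osd dump --format json
    ceph pg deep-scrub 288.6
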
2026-03-09T17:36:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:09 vm02 bash[23351]: cluster 2026-03-09T17:36:08.438014+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T17:36:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:09 vm02 bash[23351]: cluster 2026-03-09T17:36:08.438014+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T17:36:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:09 vm02 bash[23351]: cluster 2026-03-09T17:36:08.782295+0000 mgr.y (mgr.14505) 399 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:09 vm02 bash[23351]: cluster 2026-03-09T17:36:08.782295+0000 mgr.y (mgr.14505) 399 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:10.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:09.455391+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:09.455391+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:09.488495+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:09.488495+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: audit 2026-03-09T17:36:09.509326+0000 mon.b (mon.1) 443 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: audit 2026-03-09T17:36:09.509326+0000 mon.b (mon.1) 443 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: audit 2026-03-09T17:36:09.510300+0000 mgr.y (mgr.14505) 400 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: audit 2026-03-09T17:36:09.510300+0000 mgr.y (mgr.14505) 400 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:10.315688+0000 osd.6 (osd.6) 19 : cluster [DBG] 288.6 deep-scrub starts 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:10.315688+0000 osd.6 (osd.6) 19 : cluster [DBG] 288.6 deep-scrub starts 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:10.316964+0000 osd.6 (osd.6) 20 : cluster [DBG] 288.6 deep-scrub ok 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:10 vm00 bash[28333]: cluster 2026-03-09T17:36:10.316964+0000 osd.6 (osd.6) 20 : cluster [DBG] 288.6 deep-scrub ok 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:09.455391+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:09.455391+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:09.488495+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:09.488495+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: audit 2026-03-09T17:36:09.509326+0000 mon.b (mon.1) 443 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: audit 2026-03-09T17:36:09.509326+0000 mon.b (mon.1) 443 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: audit 2026-03-09T17:36:09.510300+0000 mgr.y (mgr.14505) 400 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: audit 2026-03-09T17:36:09.510300+0000 mgr.y (mgr.14505) 400 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:10.315688+0000 osd.6 (osd.6) 19 : cluster [DBG] 288.6 deep-scrub starts 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:10.315688+0000 osd.6 (osd.6) 19 : cluster [DBG] 288.6 deep-scrub starts 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:10.316964+0000 osd.6 (osd.6) 20 : cluster [DBG] 288.6 deep-scrub ok 2026-03-09T17:36:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:10 vm00 bash[20770]: cluster 2026-03-09T17:36:10.316964+0000 osd.6 (osd.6) 20 : cluster [DBG] 288.6 deep-scrub ok 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:09.455391+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:09.455391+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:09.488495+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:09.488495+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: audit 2026-03-09T17:36:09.509326+0000 mon.b (mon.1) 443 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: audit 2026-03-09T17:36:09.509326+0000 mon.b (mon.1) 443 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: audit 2026-03-09T17:36:09.510300+0000 mgr.y (mgr.14505) 400 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: audit 2026-03-09T17:36:09.510300+0000 mgr.y (mgr.14505) 400 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "288.6"}]: dispatch 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:10.315688+0000 osd.6 (osd.6) 19 : cluster [DBG] 288.6 deep-scrub starts 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:10.315688+0000 osd.6 (osd.6) 19 : cluster [DBG] 288.6 deep-scrub starts 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:10.316964+0000 osd.6 (osd.6) 20 : cluster [DBG] 288.6 deep-scrub ok 2026-03-09T17:36:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:10 vm02 bash[23351]: cluster 2026-03-09T17:36:10.316964+0000 osd.6 (osd.6) 20 : cluster [DBG] 288.6 deep-scrub ok 2026-03-09T17:36:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:11 vm00 bash[28333]: cluster 2026-03-09T17:36:10.782659+0000 mgr.y (mgr.14505) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T17:36:11.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:11 vm00 bash[28333]: cluster 2026-03-09T17:36:10.782659+0000 mgr.y (mgr.14505) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T17:36:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:11 vm00 bash[20770]: cluster 2026-03-09T17:36:10.782659+0000 mgr.y (mgr.14505) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T17:36:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:11 vm00 bash[20770]: cluster 2026-03-09T17:36:10.782659+0000 mgr.y (mgr.14505) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T17:36:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:11 vm02 bash[23351]: cluster 2026-03-09T17:36:10.782659+0000 mgr.y (mgr.14505) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T17:36:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:11 vm02 bash[23351]: cluster 2026-03-09T17:36:10.782659+0000 mgr.y (mgr.14505) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T17:36:12.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:36:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:36:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:12 vm00 bash[28333]: audit 2026-03-09T17:36:11.894038+0000 mgr.y (mgr.14505) 402 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:12.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:12 vm00 bash[28333]: audit 2026-03-09T17:36:11.894038+0000 mgr.y (mgr.14505) 402 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:12.787 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:12 vm00 bash[20770]: audit 2026-03-09T17:36:11.894038+0000 mgr.y (mgr.14505) 402 : audit [DBG] from='client.14484 
-' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:12 vm00 bash[20770]: audit 2026-03-09T17:36:11.894038+0000 mgr.y (mgr.14505) 402 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:12 vm02 bash[23351]: audit 2026-03-09T17:36:11.894038+0000 mgr.y (mgr.14505) 402 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:12 vm02 bash[23351]: audit 2026-03-09T17:36:11.894038+0000 mgr.y (mgr.14505) 402 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:13 vm02 bash[23351]: cluster 2026-03-09T17:36:12.783184+0000 mgr.y (mgr.14505) 403 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:13 vm02 bash[23351]: cluster 2026-03-09T17:36:12.783184+0000 mgr.y (mgr.14505) 403 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:13 vm02 bash[23351]: audit 2026-03-09T17:36:12.815365+0000 mon.c (mon.2) 604 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:13 vm02 bash[23351]: audit 2026-03-09T17:36:12.815365+0000 mon.c (mon.2) 604 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:13 vm00 bash[28333]: cluster 2026-03-09T17:36:12.783184+0000 mgr.y (mgr.14505) 403 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:13 vm00 bash[28333]: cluster 2026-03-09T17:36:12.783184+0000 mgr.y (mgr.14505) 403 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:13 vm00 bash[28333]: audit 2026-03-09T17:36:12.815365+0000 mon.c (mon.2) 604 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:13 vm00 bash[28333]: audit 2026-03-09T17:36:12.815365+0000 mon.c (mon.2) 604 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:13 vm00 bash[20770]: cluster 2026-03-09T17:36:12.783184+0000 mgr.y (mgr.14505) 403 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 
MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:13 vm00 bash[20770]: cluster 2026-03-09T17:36:12.783184+0000 mgr.y (mgr.14505) 403 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:13 vm00 bash[20770]: audit 2026-03-09T17:36:12.815365+0000 mon.c (mon.2) 604 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:13 vm00 bash[20770]: audit 2026-03-09T17:36:12.815365+0000 mon.c (mon.2) 604 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:15 vm02 bash[23351]: cluster 2026-03-09T17:36:14.783688+0000 mgr.y (mgr.14505) 404 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 975 B/s wr, 3 op/s 2026-03-09T17:36:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:15 vm02 bash[23351]: cluster 2026-03-09T17:36:14.783688+0000 mgr.y (mgr.14505) 404 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 975 B/s wr, 3 op/s 2026-03-09T17:36:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:15 vm00 bash[28333]: cluster 2026-03-09T17:36:14.783688+0000 mgr.y (mgr.14505) 404 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 975 B/s wr, 3 op/s 2026-03-09T17:36:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:15 vm00 bash[28333]: cluster 2026-03-09T17:36:14.783688+0000 mgr.y (mgr.14505) 404 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 975 B/s wr, 3 op/s 2026-03-09T17:36:16.287 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:15 vm00 bash[20770]: cluster 2026-03-09T17:36:14.783688+0000 mgr.y (mgr.14505) 404 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 975 B/s wr, 3 op/s 2026-03-09T17:36:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:15 vm00 bash[20770]: cluster 2026-03-09T17:36:14.783688+0000 mgr.y (mgr.14505) 404 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 975 B/s wr, 3 op/s 2026-03-09T17:36:16.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:36:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:36:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:36:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:17 vm00 bash[28333]: cluster 2026-03-09T17:36:16.784005+0000 mgr.y (mgr.14505) 405 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 613 B/s rd, 245 B/s wr, 1 op/s 2026-03-09T17:36:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:17 vm00 bash[28333]: cluster 2026-03-09T17:36:16.784005+0000 mgr.y (mgr.14505) 405 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 613 B/s rd, 
245 B/s wr, 1 op/s 2026-03-09T17:36:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:17 vm00 bash[20770]: cluster 2026-03-09T17:36:16.784005+0000 mgr.y (mgr.14505) 405 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 613 B/s rd, 245 B/s wr, 1 op/s 2026-03-09T17:36:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:17 vm00 bash[20770]: cluster 2026-03-09T17:36:16.784005+0000 mgr.y (mgr.14505) 405 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 613 B/s rd, 245 B/s wr, 1 op/s 2026-03-09T17:36:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:17 vm02 bash[23351]: cluster 2026-03-09T17:36:16.784005+0000 mgr.y (mgr.14505) 405 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 613 B/s rd, 245 B/s wr, 1 op/s 2026-03-09T17:36:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:17 vm02 bash[23351]: cluster 2026-03-09T17:36:16.784005+0000 mgr.y (mgr.14505) 405 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 613 B/s rd, 245 B/s wr, 1 op/s 2026-03-09T17:36:20.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:19 vm00 bash[28333]: cluster 2026-03-09T17:36:18.784831+0000 mgr.y (mgr.14505) 406 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T17:36:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:19 vm00 bash[28333]: cluster 2026-03-09T17:36:18.784831+0000 mgr.y (mgr.14505) 406 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T17:36:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:19 vm00 bash[20770]: cluster 2026-03-09T17:36:18.784831+0000 mgr.y (mgr.14505) 406 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T17:36:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:19 vm00 bash[20770]: cluster 2026-03-09T17:36:18.784831+0000 mgr.y (mgr.14505) 406 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T17:36:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:19 vm02 bash[23351]: cluster 2026-03-09T17:36:18.784831+0000 mgr.y (mgr.14505) 406 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T17:36:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:19 vm02 bash[23351]: cluster 2026-03-09T17:36:18.784831+0000 mgr.y (mgr.14505) 406 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T17:36:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:21 vm00 bash[28333]: cluster 2026-03-09T17:36:20.785111+0000 mgr.y (mgr.14505) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 906 B/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:36:22.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:21 vm00 bash[28333]: cluster 2026-03-09T17:36:20.785111+0000 mgr.y (mgr.14505) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 
160 GiB avail; 906 B/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:36:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:21 vm00 bash[20770]: cluster 2026-03-09T17:36:20.785111+0000 mgr.y (mgr.14505) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 906 B/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:36:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:21 vm00 bash[20770]: cluster 2026-03-09T17:36:20.785111+0000 mgr.y (mgr.14505) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 906 B/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:36:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:36:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:36:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:21 vm02 bash[23351]: cluster 2026-03-09T17:36:20.785111+0000 mgr.y (mgr.14505) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 906 B/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:36:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:21 vm02 bash[23351]: cluster 2026-03-09T17:36:20.785111+0000 mgr.y (mgr.14505) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 906 B/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:36:23.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:22 vm00 bash[28333]: audit 2026-03-09T17:36:21.904712+0000 mgr.y (mgr.14505) 408 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:22 vm00 bash[28333]: audit 2026-03-09T17:36:21.904712+0000 mgr.y (mgr.14505) 408 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:22 vm00 bash[20770]: audit 2026-03-09T17:36:21.904712+0000 mgr.y (mgr.14505) 408 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:22 vm00 bash[20770]: audit 2026-03-09T17:36:21.904712+0000 mgr.y (mgr.14505) 408 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:23.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:22 vm02 bash[23351]: audit 2026-03-09T17:36:21.904712+0000 mgr.y (mgr.14505) 408 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:22 vm02 bash[23351]: audit 2026-03-09T17:36:21.904712+0000 mgr.y (mgr.14505) 408 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:23 vm00 bash[28333]: cluster 2026-03-09T17:36:22.785733+0000 mgr.y (mgr.14505) 409 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-09T17:36:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:23 vm00 bash[28333]: cluster 2026-03-09T17:36:22.785733+0000 mgr.y 
(mgr.14505) 409 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-09T17:36:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:23 vm00 bash[20770]: cluster 2026-03-09T17:36:22.785733+0000 mgr.y (mgr.14505) 409 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-09T17:36:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:23 vm00 bash[20770]: cluster 2026-03-09T17:36:22.785733+0000 mgr.y (mgr.14505) 409 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-09T17:36:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:23 vm02 bash[23351]: cluster 2026-03-09T17:36:22.785733+0000 mgr.y (mgr.14505) 409 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-09T17:36:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:23 vm02 bash[23351]: cluster 2026-03-09T17:36:22.785733+0000 mgr.y (mgr.14505) 409 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-09T17:36:26.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:25 vm00 bash[28333]: cluster 2026-03-09T17:36:24.786248+0000 mgr.y (mgr.14505) 410 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:25 vm00 bash[28333]: cluster 2026-03-09T17:36:24.786248+0000 mgr.y (mgr.14505) 410 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:25 vm00 bash[20770]: cluster 2026-03-09T17:36:24.786248+0000 mgr.y (mgr.14505) 410 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:25 vm00 bash[20770]: cluster 2026-03-09T17:36:24.786248+0000 mgr.y (mgr.14505) 410 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:25 vm02 bash[23351]: cluster 2026-03-09T17:36:24.786248+0000 mgr.y (mgr.14505) 410 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:25 vm02 bash[23351]: cluster 2026-03-09T17:36:24.786248+0000 mgr.y (mgr.14505) 410 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:36:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:36:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:27 vm00 bash[28333]: cluster 2026-03-09T17:36:26.786647+0000 mgr.y (mgr.14505) 411 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 
MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:27 vm00 bash[28333]: cluster 2026-03-09T17:36:26.786647+0000 mgr.y (mgr.14505) 411 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:27 vm00 bash[28333]: audit 2026-03-09T17:36:27.821760+0000 mon.c (mon.2) 605 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:27 vm00 bash[28333]: audit 2026-03-09T17:36:27.821760+0000 mon.c (mon.2) 605 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:27 vm00 bash[20770]: cluster 2026-03-09T17:36:26.786647+0000 mgr.y (mgr.14505) 411 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:27 vm00 bash[20770]: cluster 2026-03-09T17:36:26.786647+0000 mgr.y (mgr.14505) 411 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:27 vm00 bash[20770]: audit 2026-03-09T17:36:27.821760+0000 mon.c (mon.2) 605 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:27 vm00 bash[20770]: audit 2026-03-09T17:36:27.821760+0000 mon.c (mon.2) 605 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:27 vm02 bash[23351]: cluster 2026-03-09T17:36:26.786647+0000 mgr.y (mgr.14505) 411 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:36:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:27 vm02 bash[23351]: cluster 2026-03-09T17:36:26.786647+0000 mgr.y (mgr.14505) 411 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:36:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:27 vm02 bash[23351]: audit 2026-03-09T17:36:27.821760+0000 mon.c (mon.2) 605 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:27 vm02 bash[23351]: audit 2026-03-09T17:36:27.821760+0000 mon.c (mon.2) 605 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:29.788 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 17:36:29 vm00 bash[31220]: debug 2026-03-09T17:36:29.515+0000 7f4782c8d640 -1 snap_mapper.add_oid found existing snaps mapped on 288:647adeec:test-rados-api-vm00-60118-80::foo:2, removing 2026-03-09T17:36:29.788 
INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:36:29 vm00 bash[37416]: debug 2026-03-09T17:36:29.515+0000 7f73812d1640 -1 snap_mapper.add_oid found existing snaps mapped on 288:647adeec:test-rados-api-vm00-60118-80::foo:2, removing 2026-03-09T17:36:29.886 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:36:29 vm02 bash[38575]: debug 2026-03-09T17:36:29.516+0000 7f6ffef7f640 -1 snap_mapper.add_oid found existing snaps mapped on 288:647adeec:test-rados-api-vm00-60118-80::foo:2, removing 2026-03-09T17:36:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: cluster 2026-03-09T17:36:28.787526+0000 mgr.y (mgr.14505) 412 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: cluster 2026-03-09T17:36:28.787526+0000 mgr.y (mgr.14505) 412 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.519065+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.519065+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.519945+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.519945+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.520270+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.520270+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.520860+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:29 vm00 bash[28333]: audit 2026-03-09T17:36:29.520860+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: cluster 2026-03-09T17:36:28.787526+0000 mgr.y (mgr.14505) 412 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: cluster 2026-03-09T17:36:28.787526+0000 mgr.y (mgr.14505) 412 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.519065+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.519065+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.519945+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.519945+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.520270+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.520270+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.520860+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:29 vm00 bash[20770]: audit 2026-03-09T17:36:29.520860+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: cluster 2026-03-09T17:36:28.787526+0000 mgr.y (mgr.14505) 412 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: cluster 2026-03-09T17:36:28.787526+0000 mgr.y (mgr.14505) 412 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.519065+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.519065+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.519945+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.519945+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.520270+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.520270+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.520860+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:29 vm02 bash[23351]: audit 2026-03-09T17:36:29.520860+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-79"}]: dispatch 2026-03-09T17:36:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:30 vm00 bash[28333]: cluster 2026-03-09T17:36:29.978520+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T17:36:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:30 vm00 bash[28333]: cluster 2026-03-09T17:36:29.978520+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T17:36:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:30 vm00 bash[20770]: cluster 2026-03-09T17:36:29.978520+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T17:36:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:30 vm00 bash[20770]: cluster 2026-03-09T17:36:29.978520+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T17:36:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:30 vm02 bash[23351]: cluster 2026-03-09T17:36:29.978520+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T17:36:31.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:30 vm02 bash[23351]: cluster 2026-03-09T17:36:29.978520+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: cluster 2026-03-09T17:36:30.787830+0000 mgr.y (mgr.14505) 413 : cluster [DBG] pgmap v651: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: cluster 2026-03-09T17:36:30.787830+0000 mgr.y (mgr.14505) 413 : cluster [DBG] pgmap v651: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: cluster 2026-03-09T17:36:31.002762+0000 mon.a (mon.0) 2628 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: cluster 2026-03-09T17:36:31.002762+0000 mon.a (mon.0) 2628 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: audit 2026-03-09T17:36:31.003817+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: audit 2026-03-09T17:36:31.003817+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: audit 2026-03-09T17:36:31.006406+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:32 vm00 bash[28333]: audit 2026-03-09T17:36:31.006406+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: cluster 2026-03-09T17:36:30.787830+0000 mgr.y (mgr.14505) 413 : cluster [DBG] pgmap v651: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: cluster 2026-03-09T17:36:30.787830+0000 mgr.y (mgr.14505) 413 : cluster [DBG] pgmap v651: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: cluster 2026-03-09T17:36:31.002762+0000 mon.a (mon.0) 2628 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: cluster 2026-03-09T17:36:31.002762+0000 mon.a (mon.0) 2628 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: audit 2026-03-09T17:36:31.003817+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: audit 2026-03-09T17:36:31.003817+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: audit 2026-03-09T17:36:31.006406+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:32 vm00 bash[20770]: audit 2026-03-09T17:36:31.006406+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: cluster 2026-03-09T17:36:30.787830+0000 mgr.y (mgr.14505) 413 : cluster [DBG] pgmap v651: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: cluster 2026-03-09T17:36:30.787830+0000 mgr.y (mgr.14505) 413 : cluster [DBG] pgmap v651: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: cluster 2026-03-09T17:36:31.002762+0000 mon.a (mon.0) 2628 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: cluster 2026-03-09T17:36:31.002762+0000 mon.a (mon.0) 2628 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: audit 2026-03-09T17:36:31.003817+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: audit 2026-03-09T17:36:31.003817+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: audit 2026-03-09T17:36:31.006406+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:32 vm02 bash[23351]: audit 2026-03-09T17:36:31.006406+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:32.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:36:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:31.913991+0000 mgr.y (mgr.14505) 414 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:31.913991+0000 mgr.y (mgr.14505) 414 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:31.987680+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:31.987680+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: cluster 2026-03-09T17:36:31.998693+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: cluster 2026-03-09T17:36:31.998693+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:31.998958+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:31.998958+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:32.014978+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:32.014978+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:32.030316+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:33 vm00 bash[28333]: audit 2026-03-09T17:36:32.030316+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:31.913991+0000 mgr.y (mgr.14505) 414 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:31.913991+0000 mgr.y (mgr.14505) 414 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:31.987680+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:31.987680+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: cluster 2026-03-09T17:36:31.998693+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: cluster 2026-03-09T17:36:31.998693+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:31.998958+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:31.998958+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:32.014978+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:32.014978+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:32.030316+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:33 vm00 bash[20770]: audit 2026-03-09T17:36:32.030316+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:31.913991+0000 mgr.y (mgr.14505) 414 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:31.913991+0000 mgr.y (mgr.14505) 414 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:31.987680+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:31.987680+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: cluster 2026-03-09T17:36:31.998693+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: cluster 2026-03-09T17:36:31.998693+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:31.998958+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:31.998958+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:32.014978+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:32.014978+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:32.030316+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:33 vm02 bash[23351]: audit 2026-03-09T17:36:32.030316+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: cluster 2026-03-09T17:36:32.788385+0000 mgr.y (mgr.14505) 415 : cluster [DBG] pgmap v654: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: cluster 2026-03-09T17:36:32.788385+0000 mgr.y (mgr.14505) 415 : cluster [DBG] pgmap v654: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: audit 2026-03-09T17:36:33.020095+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: audit 2026-03-09T17:36:33.020095+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: cluster 2026-03-09T17:36:33.022726+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: cluster 2026-03-09T17:36:33.022726+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: audit 2026-03-09T17:36:33.026638+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: audit 2026-03-09T17:36:33.026638+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: audit 2026-03-09T17:36:33.050031+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:34 vm02 bash[23351]: audit 2026-03-09T17:36:33.050031+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: cluster 2026-03-09T17:36:32.788385+0000 mgr.y (mgr.14505) 415 : cluster [DBG] pgmap v654: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: cluster 2026-03-09T17:36:32.788385+0000 mgr.y (mgr.14505) 415 : cluster [DBG] pgmap v654: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: audit 2026-03-09T17:36:33.020095+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: audit 2026-03-09T17:36:33.020095+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: cluster 2026-03-09T17:36:33.022726+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: cluster 2026-03-09T17:36:33.022726+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: audit 2026-03-09T17:36:33.026638+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: audit 2026-03-09T17:36:33.026638+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: audit 2026-03-09T17:36:33.050031+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:34 vm00 bash[28333]: audit 2026-03-09T17:36:33.050031+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: cluster 2026-03-09T17:36:32.788385+0000 mgr.y (mgr.14505) 415 : cluster [DBG] pgmap v654: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: cluster 2026-03-09T17:36:32.788385+0000 mgr.y (mgr.14505) 415 : cluster [DBG] pgmap v654: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: audit 2026-03-09T17:36:33.020095+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: audit 2026-03-09T17:36:33.020095+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: cluster 2026-03-09T17:36:33.022726+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: cluster 2026-03-09T17:36:33.022726+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: audit 2026-03-09T17:36:33.026638+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: audit 2026-03-09T17:36:33.026638+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: audit 2026-03-09T17:36:33.050031+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:34 vm00 bash[20770]: audit 2026-03-09T17:36:33.050031+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: audit 2026-03-09T17:36:34.055762+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: audit 2026-03-09T17:36:34.055762+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: cluster 2026-03-09T17:36:34.057120+0000 mon.a (mon.0) 2637 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: cluster 2026-03-09T17:36:34.057120+0000 mon.a (mon.0) 2637 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: audit 2026-03-09T17:36:34.058816+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: audit 2026-03-09T17:36:34.058816+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: audit 2026-03-09T17:36:34.059908+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:35 vm02 bash[23351]: audit 2026-03-09T17:36:34.059908+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: audit 2026-03-09T17:36:34.055762+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: audit 2026-03-09T17:36:34.055762+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: cluster 2026-03-09T17:36:34.057120+0000 mon.a (mon.0) 2637 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: cluster 2026-03-09T17:36:34.057120+0000 mon.a (mon.0) 2637 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: audit 2026-03-09T17:36:34.058816+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: audit 2026-03-09T17:36:34.058816+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: audit 2026-03-09T17:36:34.059908+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:35 vm00 bash[28333]: audit 2026-03-09T17:36:34.059908+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: audit 2026-03-09T17:36:34.055762+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: audit 2026-03-09T17:36:34.055762+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: cluster 2026-03-09T17:36:34.057120+0000 mon.a (mon.0) 2637 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: cluster 2026-03-09T17:36:34.057120+0000 mon.a (mon.0) 2637 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: audit 2026-03-09T17:36:34.058816+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: audit 2026-03-09T17:36:34.058816+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: audit 2026-03-09T17:36:34.059908+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:35.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:35 vm00 bash[20770]: audit 2026-03-09T17:36:34.059908+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: cluster 2026-03-09T17:36:34.788713+0000 mgr.y (mgr.14505) 416 : cluster [DBG] pgmap v657: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: cluster 2026-03-09T17:36:34.788713+0000 mgr.y (mgr.14505) 416 : cluster [DBG] pgmap v657: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: audit 2026-03-09T17:36:35.063206+0000 mon.a (mon.0) 2639 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: audit 2026-03-09T17:36:35.063206+0000 mon.a (mon.0) 2639 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: cluster 2026-03-09T17:36:35.066792+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: cluster 2026-03-09T17:36:35.066792+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: audit 2026-03-09T17:36:35.071463+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: audit 2026-03-09T17:36:35.071463+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: audit 2026-03-09T17:36:35.073190+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:36 vm02 bash[23351]: audit 2026-03-09T17:36:35.073190+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: cluster 2026-03-09T17:36:34.788713+0000 mgr.y (mgr.14505) 416 : cluster [DBG] pgmap v657: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: cluster 2026-03-09T17:36:34.788713+0000 mgr.y (mgr.14505) 416 : cluster [DBG] pgmap v657: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: audit 2026-03-09T17:36:35.063206+0000 mon.a (mon.0) 2639 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: audit 2026-03-09T17:36:35.063206+0000 mon.a (mon.0) 2639 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: cluster 2026-03-09T17:36:35.066792+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: cluster 2026-03-09T17:36:35.066792+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: audit 2026-03-09T17:36:35.071463+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: audit 2026-03-09T17:36:35.071463+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: audit 2026-03-09T17:36:35.073190+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:36 vm00 bash[28333]: audit 2026-03-09T17:36:35.073190+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: cluster 2026-03-09T17:36:34.788713+0000 mgr.y (mgr.14505) 416 : cluster [DBG] pgmap v657: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: cluster 2026-03-09T17:36:34.788713+0000 mgr.y (mgr.14505) 416 : cluster [DBG] pgmap v657: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: audit 2026-03-09T17:36:35.063206+0000 mon.a (mon.0) 2639 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: audit 2026-03-09T17:36:35.063206+0000 mon.a (mon.0) 2639 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: cluster 2026-03-09T17:36:35.066792+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: cluster 2026-03-09T17:36:35.066792+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: audit 2026-03-09T17:36:35.071463+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: audit 2026-03-09T17:36:35.071463+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: audit 2026-03-09T17:36:35.073190+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:36 vm00 bash[20770]: audit 2026-03-09T17:36:35.073190+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:36:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:36:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: audit 2026-03-09T17:36:36.065942+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: audit 2026-03-09T17:36:36.065942+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: cluster 2026-03-09T17:36:36.069548+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: cluster 2026-03-09T17:36:36.069548+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: audit 2026-03-09T17:36:36.098123+0000 mon.b (mon.1) 452 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: audit 2026-03-09T17:36:36.098123+0000 mon.b (mon.1) 452 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: audit 2026-03-09T17:36:36.099184+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:37 vm02 bash[23351]: audit 2026-03-09T17:36:36.099184+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: audit 2026-03-09T17:36:36.065942+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: audit 2026-03-09T17:36:36.065942+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: cluster 2026-03-09T17:36:36.069548+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: cluster 2026-03-09T17:36:36.069548+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: audit 2026-03-09T17:36:36.098123+0000 mon.b (mon.1) 452 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: audit 2026-03-09T17:36:36.098123+0000 mon.b (mon.1) 452 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: audit 2026-03-09T17:36:36.099184+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:37 vm00 bash[28333]: audit 2026-03-09T17:36:36.099184+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: audit 2026-03-09T17:36:36.065942+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: audit 2026-03-09T17:36:36.065942+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: cluster 2026-03-09T17:36:36.069548+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: cluster 2026-03-09T17:36:36.069548+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: audit 2026-03-09T17:36:36.098123+0000 mon.b (mon.1) 452 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: audit 2026-03-09T17:36:36.098123+0000 mon.b (mon.1) 452 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: audit 2026-03-09T17:36:36.099184+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:37.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:37 vm00 bash[20770]: audit 2026-03-09T17:36:36.099184+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: cluster 2026-03-09T17:36:36.789094+0000 mgr.y (mgr.14505) 417 : cluster [DBG] pgmap v660: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: cluster 2026-03-09T17:36:36.789094+0000 mgr.y (mgr.14505) 417 : cluster [DBG] pgmap v660: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: audit 2026-03-09T17:36:37.079946+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: audit 2026-03-09T17:36:37.079946+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: cluster 2026-03-09T17:36:37.083498+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: cluster 2026-03-09T17:36:37.083498+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: audit 2026-03-09T17:36:37.113803+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: audit 2026-03-09T17:36:37.113803+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: audit 2026-03-09T17:36:37.115044+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:38 vm02 bash[23351]: audit 2026-03-09T17:36:37.115044+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: cluster 2026-03-09T17:36:36.789094+0000 mgr.y (mgr.14505) 417 : cluster [DBG] pgmap v660: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: cluster 2026-03-09T17:36:36.789094+0000 mgr.y (mgr.14505) 417 : cluster [DBG] pgmap v660: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: audit 2026-03-09T17:36:37.079946+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: audit 2026-03-09T17:36:37.079946+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: cluster 2026-03-09T17:36:37.083498+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: cluster 2026-03-09T17:36:37.083498+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: audit 2026-03-09T17:36:37.113803+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: audit 2026-03-09T17:36:37.113803+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: audit 2026-03-09T17:36:37.115044+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:38 vm00 bash[28333]: audit 2026-03-09T17:36:37.115044+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: cluster 2026-03-09T17:36:36.789094+0000 mgr.y (mgr.14505) 417 : cluster [DBG] pgmap v660: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: cluster 2026-03-09T17:36:36.789094+0000 mgr.y (mgr.14505) 417 : cluster [DBG] pgmap v660: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: audit 2026-03-09T17:36:37.079946+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: audit 2026-03-09T17:36:37.079946+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: cluster 2026-03-09T17:36:37.083498+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: cluster 2026-03-09T17:36:37.083498+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: audit 2026-03-09T17:36:37.113803+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: audit 2026-03-09T17:36:37.113803+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: audit 2026-03-09T17:36:37.115044+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:38 vm00 bash[20770]: audit 2026-03-09T17:36:37.115044+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: audit 2026-03-09T17:36:38.110195+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: audit 2026-03-09T17:36:38.110195+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: cluster 2026-03-09T17:36:38.115937+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: cluster 2026-03-09T17:36:38.115937+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: audit 2026-03-09T17:36:38.137114+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: audit 2026-03-09T17:36:38.137114+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: audit 2026-03-09T17:36:38.138482+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:39 vm02 bash[23351]: audit 2026-03-09T17:36:38.138482+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: audit 2026-03-09T17:36:38.110195+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: audit 2026-03-09T17:36:38.110195+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: cluster 2026-03-09T17:36:38.115937+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: cluster 2026-03-09T17:36:38.115937+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: audit 2026-03-09T17:36:38.137114+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: audit 2026-03-09T17:36:38.137114+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: audit 2026-03-09T17:36:38.138482+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:39 vm00 bash[28333]: audit 2026-03-09T17:36:38.138482+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: audit 2026-03-09T17:36:38.110195+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: audit 2026-03-09T17:36:38.110195+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: cluster 2026-03-09T17:36:38.115937+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: cluster 2026-03-09T17:36:38.115937+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: audit 2026-03-09T17:36:38.137114+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: audit 2026-03-09T17:36:38.137114+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: audit 2026-03-09T17:36:38.138482+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:39.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:39 vm00 bash[20770]: audit 2026-03-09T17:36:38.138482+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: cluster 2026-03-09T17:36:38.789529+0000 mgr.y (mgr.14505) 418 : cluster [DBG] pgmap v663: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 9.2 KiB/s wr, 200 op/s 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: cluster 2026-03-09T17:36:38.789529+0000 mgr.y (mgr.14505) 418 : cluster [DBG] pgmap v663: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 9.2 KiB/s wr, 200 op/s 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.119963+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.119963+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: cluster 2026-03-09T17:36:39.126387+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: cluster 2026-03-09T17:36:39.126387+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.165097+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.165097+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.165895+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.165895+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.166263+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.166263+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.166910+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:40 vm00 bash[28333]: audit 2026-03-09T17:36:39.166910+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: cluster 2026-03-09T17:36:38.789529+0000 mgr.y (mgr.14505) 418 : cluster [DBG] pgmap v663: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 9.2 KiB/s wr, 200 op/s 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: cluster 2026-03-09T17:36:38.789529+0000 mgr.y (mgr.14505) 418 : cluster [DBG] pgmap v663: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 9.2 KiB/s wr, 200 op/s 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.119963+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.119963+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: cluster 2026-03-09T17:36:39.126387+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: cluster 2026-03-09T17:36:39.126387+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.165097+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.165097+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.165895+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.165895+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.166263+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.166263+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.166910+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:40 vm00 bash[20770]: audit 2026-03-09T17:36:39.166910+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: cluster 2026-03-09T17:36:38.789529+0000 mgr.y (mgr.14505) 418 : cluster [DBG] pgmap v663: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 9.2 KiB/s wr, 200 op/s 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: cluster 2026-03-09T17:36:38.789529+0000 mgr.y (mgr.14505) 418 : cluster [DBG] pgmap v663: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 9.2 KiB/s wr, 200 op/s 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.119963+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.119963+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: cluster 2026-03-09T17:36:39.126387+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: cluster 2026-03-09T17:36:39.126387+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.165097+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.165097+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.165895+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.165895+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.166263+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.166263+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.166910+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:40 vm02 bash[23351]: audit 2026-03-09T17:36:39.166910+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-81"}]: dispatch 2026-03-09T17:36:41.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:41 vm00 bash[28333]: cluster 2026-03-09T17:36:40.197675+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T17:36:41.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:41 vm00 bash[28333]: cluster 2026-03-09T17:36:40.197675+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T17:36:41.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:41 vm00 bash[20770]: cluster 2026-03-09T17:36:40.197675+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T17:36:41.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:41 vm00 bash[20770]: cluster 2026-03-09T17:36:40.197675+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T17:36:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:41 vm02 bash[23351]: cluster 2026-03-09T17:36:40.197675+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T17:36:41.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:41 vm02 bash[23351]: cluster 2026-03-09T17:36:40.197675+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T17:36:42.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:36:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: cluster 2026-03-09T17:36:40.789904+0000 mgr.y (mgr.14505) 419 : cluster [DBG] pgmap v666: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 199 op/s 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: cluster 2026-03-09T17:36:40.789904+0000 mgr.y (mgr.14505) 419 : cluster [DBG] pgmap v666: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 199 op/s 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: cluster 2026-03-09T17:36:41.224230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: cluster 2026-03-09T17:36:41.224230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 17:36:42 vm00 bash[28333]: audit 2026-03-09T17:36:41.224451+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: audit 2026-03-09T17:36:41.224451+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: audit 2026-03-09T17:36:41.226330+0000 mon.a (mon.0) 2657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:42 vm00 bash[28333]: audit 2026-03-09T17:36:41.226330+0000 mon.a (mon.0) 2657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: cluster 2026-03-09T17:36:40.789904+0000 mgr.y (mgr.14505) 419 : cluster [DBG] pgmap v666: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 199 op/s 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: cluster 2026-03-09T17:36:40.789904+0000 mgr.y (mgr.14505) 419 : cluster [DBG] pgmap v666: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 199 op/s 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: cluster 2026-03-09T17:36:41.224230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: cluster 2026-03-09T17:36:41.224230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: audit 2026-03-09T17:36:41.224451+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: audit 2026-03-09T17:36:41.224451+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: audit 2026-03-09T17:36:41.226330+0000 mon.a (mon.0) 2657 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:42 vm00 bash[20770]: audit 2026-03-09T17:36:41.226330+0000 mon.a (mon.0) 2657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: cluster 2026-03-09T17:36:40.789904+0000 mgr.y (mgr.14505) 419 : cluster [DBG] pgmap v666: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 199 op/s 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: cluster 2026-03-09T17:36:40.789904+0000 mgr.y (mgr.14505) 419 : cluster [DBG] pgmap v666: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 199 op/s 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: cluster 2026-03-09T17:36:41.224230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: cluster 2026-03-09T17:36:41.224230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: audit 2026-03-09T17:36:41.224451+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: audit 2026-03-09T17:36:41.224451+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: audit 2026-03-09T17:36:41.226330+0000 mon.a (mon.0) 2657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:42 vm02 bash[23351]: audit 2026-03-09T17:36:41.226330+0000 mon.a (mon.0) 2657 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:41.924669+0000 mgr.y (mgr.14505) 420 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:41.924669+0000 mgr.y (mgr.14505) 420 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.206334+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.206334+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: cluster 2026-03-09T17:36:42.222181+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: cluster 2026-03-09T17:36:42.222181+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.254746+0000 mon.b (mon.1) 458 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.254746+0000 mon.b (mon.1) 458 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.257477+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.257477+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.258717+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.258717+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.834754+0000 mon.c (mon.2) 606 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:43 vm00 bash[28333]: audit 2026-03-09T17:36:42.834754+0000 mon.c (mon.2) 606 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:41.924669+0000 mgr.y (mgr.14505) 420 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:41.924669+0000 mgr.y (mgr.14505) 420 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.206334+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.206334+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: cluster 2026-03-09T17:36:42.222181+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: cluster 2026-03-09T17:36:42.222181+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.254746+0000 mon.b (mon.1) 458 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.254746+0000 mon.b (mon.1) 458 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.257477+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.257477+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.258717+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.258717+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.834754+0000 mon.c (mon.2) 606 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:43 vm00 bash[20770]: audit 2026-03-09T17:36:42.834754+0000 mon.c (mon.2) 606 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:41.924669+0000 mgr.y (mgr.14505) 420 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:41.924669+0000 mgr.y (mgr.14505) 420 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.206334+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.206334+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: cluster 2026-03-09T17:36:42.222181+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: cluster 2026-03-09T17:36:42.222181+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.254746+0000 mon.b (mon.1) 458 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.254746+0000 mon.b (mon.1) 458 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.257477+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.257477+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.258717+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.258717+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.834754+0000 mon.c (mon.2) 606 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:43 vm02 bash[23351]: audit 2026-03-09T17:36:42.834754+0000 mon.c (mon.2) 606 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: cluster 2026-03-09T17:36:42.790446+0000 mgr.y (mgr.14505) 421 : cluster [DBG] pgmap v669: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 2.5 KiB/s wr, 16 op/s 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: cluster 2026-03-09T17:36:42.790446+0000 mgr.y (mgr.14505) 421 : cluster [DBG] pgmap v669: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 2.5 KiB/s wr, 16 op/s 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: audit 2026-03-09T17:36:43.273952+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: audit 2026-03-09T17:36:43.273952+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: cluster 2026-03-09T17:36:43.280370+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: cluster 2026-03-09T17:36:43.280370+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: audit 2026-03-09T17:36:43.282834+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: audit 2026-03-09T17:36:43.282834+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: audit 2026-03-09T17:36:43.301289+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:44 vm02 bash[23351]: audit 2026-03-09T17:36:43.301289+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: cluster 2026-03-09T17:36:42.790446+0000 mgr.y (mgr.14505) 421 : cluster [DBG] pgmap v669: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 2.5 KiB/s wr, 16 op/s 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: cluster 2026-03-09T17:36:42.790446+0000 mgr.y (mgr.14505) 421 : cluster [DBG] pgmap v669: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 2.5 KiB/s wr, 16 op/s 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: audit 2026-03-09T17:36:43.273952+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: audit 2026-03-09T17:36:43.273952+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: cluster 2026-03-09T17:36:43.280370+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: cluster 2026-03-09T17:36:43.280370+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: audit 2026-03-09T17:36:43.282834+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: audit 2026-03-09T17:36:43.282834+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: audit 2026-03-09T17:36:43.301289+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:44 vm00 bash[28333]: audit 2026-03-09T17:36:43.301289+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: cluster 2026-03-09T17:36:42.790446+0000 mgr.y (mgr.14505) 421 : cluster [DBG] pgmap v669: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 2.5 KiB/s wr, 16 op/s 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: cluster 2026-03-09T17:36:42.790446+0000 mgr.y (mgr.14505) 421 : cluster [DBG] pgmap v669: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 2.5 KiB/s wr, 16 op/s 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: audit 2026-03-09T17:36:43.273952+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: audit 2026-03-09T17:36:43.273952+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: cluster 2026-03-09T17:36:43.280370+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: cluster 2026-03-09T17:36:43.280370+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: audit 2026-03-09T17:36:43.282834+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: audit 2026-03-09T17:36:43.282834+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: audit 2026-03-09T17:36:43.301289+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:44 vm00 bash[20770]: audit 2026-03-09T17:36:43.301289+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:44.327741+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:44.327741+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: cluster 2026-03-09T17:36:44.334828+0000 mon.a (mon.0) 2665 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: cluster 2026-03-09T17:36:44.334828+0000 mon.a (mon.0) 2665 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:44.335497+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:44.335497+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:44.337780+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:44.337780+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:45.336106+0000 mon.a (mon.0) 2667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: audit 2026-03-09T17:36:45.336106+0000 mon.a (mon.0) 2667 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: cluster 2026-03-09T17:36:45.341464+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T17:36:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:45 vm02 bash[23351]: cluster 2026-03-09T17:36:45.341464+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T17:36:45.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:44.327741+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:45.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:44.327741+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: cluster 2026-03-09T17:36:44.334828+0000 mon.a (mon.0) 2665 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: cluster 2026-03-09T17:36:44.334828+0000 mon.a (mon.0) 2665 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:44.335497+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:44.335497+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:44.337780+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:44.337780+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:45.336106+0000 mon.a (mon.0) 2667 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: audit 2026-03-09T17:36:45.336106+0000 mon.a (mon.0) 2667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: cluster 2026-03-09T17:36:45.341464+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:45 vm00 bash[28333]: cluster 2026-03-09T17:36:45.341464+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:44.327741+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:44.327741+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: cluster 2026-03-09T17:36:44.334828+0000 mon.a (mon.0) 2665 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: cluster 2026-03-09T17:36:44.334828+0000 mon.a (mon.0) 2665 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:44.335497+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:44.335497+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:44.337780+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:44.337780+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:45.336106+0000 mon.a (mon.0) 2667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: audit 2026-03-09T17:36:45.336106+0000 mon.a (mon.0) 2667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: cluster 2026-03-09T17:36:45.341464+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T17:36:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:45 vm00 bash[20770]: cluster 2026-03-09T17:36:45.341464+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: cluster 2026-03-09T17:36:44.790720+0000 mgr.y (mgr.14505) 422 : cluster [DBG] pgmap v672: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 19 op/s 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: cluster 2026-03-09T17:36:44.790720+0000 mgr.y (mgr.14505) 422 : cluster [DBG] pgmap v672: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 19 op/s 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: audit 2026-03-09T17:36:45.340673+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: audit 2026-03-09T17:36:45.340673+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: audit 2026-03-09T17:36:45.343677+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: audit 2026-03-09T17:36:45.343677+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: audit 2026-03-09T17:36:46.339606+0000 mon.a (mon.0) 2670 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: audit 2026-03-09T17:36:46.339606+0000 mon.a (mon.0) 2670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: cluster 2026-03-09T17:36:46.345841+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T17:36:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:46 vm02 bash[23351]: cluster 2026-03-09T17:36:46.345841+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:36:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:36:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: cluster 2026-03-09T17:36:44.790720+0000 mgr.y (mgr.14505) 422 : cluster [DBG] pgmap v672: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 19 op/s 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: cluster 2026-03-09T17:36:44.790720+0000 mgr.y (mgr.14505) 422 : cluster [DBG] pgmap v672: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 19 op/s 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: audit 2026-03-09T17:36:45.340673+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: audit 2026-03-09T17:36:45.340673+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: audit 2026-03-09T17:36:45.343677+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: audit 2026-03-09T17:36:45.343677+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: audit 2026-03-09T17:36:46.339606+0000 mon.a (mon.0) 2670 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: audit 2026-03-09T17:36:46.339606+0000 mon.a (mon.0) 2670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: cluster 2026-03-09T17:36:46.345841+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:46 vm00 bash[28333]: cluster 2026-03-09T17:36:46.345841+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: cluster 2026-03-09T17:36:44.790720+0000 mgr.y (mgr.14505) 422 : cluster [DBG] pgmap v672: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 19 op/s 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: cluster 2026-03-09T17:36:44.790720+0000 mgr.y (mgr.14505) 422 : cluster [DBG] pgmap v672: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 19 op/s 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: audit 2026-03-09T17:36:45.340673+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: audit 2026-03-09T17:36:45.340673+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: audit 2026-03-09T17:36:45.343677+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: audit 2026-03-09T17:36:45.343677+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: audit 2026-03-09T17:36:46.339606+0000 mon.a (mon.0) 2670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: audit 2026-03-09T17:36:46.339606+0000 mon.a (mon.0) 2670 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: cluster 2026-03-09T17:36:46.345841+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T17:36:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:46 vm00 bash[20770]: cluster 2026-03-09T17:36:46.345841+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T17:36:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:48 vm02 bash[23351]: cluster 2026-03-09T17:36:46.791066+0000 mgr.y (mgr.14505) 423 : cluster [DBG] pgmap v675: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:36:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:48 vm02 bash[23351]: cluster 2026-03-09T17:36:46.791066+0000 mgr.y (mgr.14505) 423 : cluster [DBG] pgmap v675: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:36:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:48 vm02 bash[23351]: cluster 2026-03-09T17:36:47.365825+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T17:36:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:48 vm02 bash[23351]: cluster 2026-03-09T17:36:47.365825+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:48 vm00 bash[20770]: cluster 2026-03-09T17:36:46.791066+0000 mgr.y (mgr.14505) 423 : cluster [DBG] pgmap v675: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:48 vm00 bash[20770]: cluster 2026-03-09T17:36:46.791066+0000 mgr.y (mgr.14505) 423 : cluster [DBG] pgmap v675: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:48 vm00 bash[20770]: cluster 2026-03-09T17:36:47.365825+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:48 vm00 bash[20770]: cluster 2026-03-09T17:36:47.365825+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:48 vm00 bash[28333]: cluster 2026-03-09T17:36:46.791066+0000 mgr.y (mgr.14505) 423 : cluster [DBG] pgmap v675: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:48 vm00 bash[28333]: cluster 2026-03-09T17:36:46.791066+0000 mgr.y (mgr.14505) 423 : cluster [DBG] pgmap v675: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:48 vm00 bash[28333]: cluster 2026-03-09T17:36:47.365825+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T17:36:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:48 vm00 bash[28333]: cluster 2026-03-09T17:36:47.365825+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap 
e443: 8 total, 8 up, 8 in 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: cluster 2026-03-09T17:36:48.368845+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: cluster 2026-03-09T17:36:48.368845+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.409189+0000 mon.b (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.409189+0000 mon.b (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.409957+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.409957+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.410230+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.410230+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.410937+0000 mon.a (mon.0) 2675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.410937+0000 mon.a (mon.0) 2675 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.703799+0000 mon.c (mon.2) 607 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:48.703799+0000 mon.c (mon.2) 607 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:49.016697+0000 mon.c (mon.2) 608 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:49.016697+0000 mon.c (mon.2) 608 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:49.017438+0000 mon.c (mon.2) 609 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:49.017438+0000 mon.c (mon.2) 609 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:49.023975+0000 mon.a (mon.0) 2676 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:49 vm00 bash[28333]: audit 2026-03-09T17:36:49.023975+0000 mon.a (mon.0) 2676 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: cluster 2026-03-09T17:36:48.368845+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: cluster 2026-03-09T17:36:48.368845+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.409189+0000 mon.b (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.409189+0000 mon.b (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.409957+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.409957+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.410230+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.410230+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.410937+0000 mon.a (mon.0) 2675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.410937+0000 mon.a (mon.0) 2675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.703799+0000 mon.c (mon.2) 607 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:48.703799+0000 mon.c (mon.2) 607 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:49.016697+0000 mon.c (mon.2) 608 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:49.016697+0000 mon.c (mon.2) 608 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:49.017438+0000 mon.c (mon.2) 609 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:49.017438+0000 mon.c (mon.2) 609 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:49.023975+0000 mon.a (mon.0) 2676 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:36:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:49 vm00 bash[20770]: audit 2026-03-09T17:36:49.023975+0000 mon.a (mon.0) 2676 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: cluster 2026-03-09T17:36:48.368845+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: cluster 2026-03-09T17:36:48.368845+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.409189+0000 mon.b (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.409189+0000 mon.b (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.409957+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.409957+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.410230+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.410230+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.410937+0000 mon.a (mon.0) 2675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.410937+0000 mon.a (mon.0) 2675 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-83"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.703799+0000 mon.c (mon.2) 607 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:48.703799+0000 mon.c (mon.2) 607 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:49.016697+0000 mon.c (mon.2) 608 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:49.016697+0000 mon.c (mon.2) 608 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:49.017438+0000 mon.c (mon.2) 609 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:49.017438+0000 mon.c (mon.2) 609 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:49.023975+0000 mon.a (mon.0) 2676 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:36:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:49 vm02 bash[23351]: audit 2026-03-09T17:36:49.023975+0000 mon.a (mon.0) 2676 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:50 vm00 bash[28333]: cluster 2026-03-09T17:36:48.791379+0000 mgr.y (mgr.14505) 424 : cluster [DBG] pgmap v678: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:50 vm00 bash[28333]: cluster 2026-03-09T17:36:48.791379+0000 mgr.y (mgr.14505) 424 : cluster [DBG] pgmap v678: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:50 vm00 bash[28333]: cluster 2026-03-09T17:36:49.404226+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:50 vm00 bash[28333]: cluster 2026-03-09T17:36:49.404226+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:50 vm00 bash[20770]: cluster 2026-03-09T17:36:48.791379+0000 mgr.y (mgr.14505) 424 : cluster [DBG] pgmap v678: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB 
/ 160 GiB avail; 1.5 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:50 vm00 bash[20770]: cluster 2026-03-09T17:36:48.791379+0000 mgr.y (mgr.14505) 424 : cluster [DBG] pgmap v678: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:50 vm00 bash[20770]: cluster 2026-03-09T17:36:49.404226+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T17:36:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:50 vm00 bash[20770]: cluster 2026-03-09T17:36:49.404226+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T17:36:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:50 vm02 bash[23351]: cluster 2026-03-09T17:36:48.791379+0000 mgr.y (mgr.14505) 424 : cluster [DBG] pgmap v678: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T17:36:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:50 vm02 bash[23351]: cluster 2026-03-09T17:36:48.791379+0000 mgr.y (mgr.14505) 424 : cluster [DBG] pgmap v678: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T17:36:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:50 vm02 bash[23351]: cluster 2026-03-09T17:36:49.404226+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T17:36:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:50 vm02 bash[23351]: cluster 2026-03-09T17:36:49.404226+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: cluster 2026-03-09T17:36:50.394476+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: cluster 2026-03-09T17:36:50.394476+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: audit 2026-03-09T17:36:50.411639+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: audit 2026-03-09T17:36:50.411639+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: audit 2026-03-09T17:36:50.419644+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: audit 2026-03-09T17:36:50.419644+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: cluster 2026-03-09T17:36:50.791732+0000 mgr.y (mgr.14505) 425 : cluster [DBG] pgmap v681: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: cluster 2026-03-09T17:36:50.791732+0000 mgr.y (mgr.14505) 425 : cluster [DBG] pgmap v681: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: audit 2026-03-09T17:36:51.395011+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: audit 2026-03-09T17:36:51.395011+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: cluster 2026-03-09T17:36:51.397804+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:51 vm00 bash[28333]: cluster 2026-03-09T17:36:51.397804+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: cluster 2026-03-09T17:36:50.394476+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: cluster 2026-03-09T17:36:50.394476+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: audit 2026-03-09T17:36:50.411639+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: audit 2026-03-09T17:36:50.411639+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: audit 2026-03-09T17:36:50.419644+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: audit 2026-03-09T17:36:50.419644+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: cluster 2026-03-09T17:36:50.791732+0000 mgr.y (mgr.14505) 425 : cluster [DBG] pgmap v681: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: cluster 2026-03-09T17:36:50.791732+0000 mgr.y (mgr.14505) 425 : cluster [DBG] pgmap v681: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: audit 2026-03-09T17:36:51.395011+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: audit 2026-03-09T17:36:51.395011+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: cluster 2026-03-09T17:36:51.397804+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T17:36:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:51 vm00 bash[20770]: cluster 2026-03-09T17:36:51.397804+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: cluster 2026-03-09T17:36:50.394476+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: cluster 2026-03-09T17:36:50.394476+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: audit 2026-03-09T17:36:50.411639+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: audit 2026-03-09T17:36:50.411639+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: audit 2026-03-09T17:36:50.419644+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: audit 2026-03-09T17:36:50.419644+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: cluster 2026-03-09T17:36:50.791732+0000 mgr.y (mgr.14505) 425 : cluster [DBG] pgmap v681: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: cluster 2026-03-09T17:36:50.791732+0000 mgr.y (mgr.14505) 425 : cluster [DBG] pgmap v681: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: audit 2026-03-09T17:36:51.395011+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: audit 2026-03-09T17:36:51.395011+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: cluster 2026-03-09T17:36:51.397804+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T17:36:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:51 vm02 bash[23351]: cluster 2026-03-09T17:36:51.397804+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T17:36:52.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:36:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.409995+0000 mon.b (mon.1) 466 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.409995+0000 mon.b (mon.1) 466 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: cluster 2026-03-09T17:36:51.417178+0000 mon.a (mon.0) 2682 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: cluster 2026-03-09T17:36:51.417178+0000 mon.a (mon.0) 2682 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.419257+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.419257+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.431555+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.431555+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.935248+0000 mgr.y (mgr.14505) 426 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:51.935248+0000 mgr.y (mgr.14505) 426 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:52.398026+0000 mon.a (mon.0) 2684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:52.398026+0000 mon.a (mon.0) 2684 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: cluster 2026-03-09T17:36:52.401210+0000 mon.a (mon.0) 2685 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: cluster 2026-03-09T17:36:52.401210+0000 mon.a (mon.0) 2685 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:52.401257+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:52.401257+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:52.403007+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:52 vm00 bash[28333]: audit 2026-03-09T17:36:52.403007+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.409995+0000 mon.b (mon.1) 466 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.409995+0000 mon.b (mon.1) 466 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: cluster 2026-03-09T17:36:51.417178+0000 mon.a (mon.0) 2682 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: cluster 2026-03-09T17:36:51.417178+0000 mon.a (mon.0) 2682 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.419257+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.419257+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.431555+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.431555+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.935248+0000 mgr.y (mgr.14505) 426 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:51.935248+0000 mgr.y (mgr.14505) 426 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:52.398026+0000 mon.a (mon.0) 2684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:52.398026+0000 mon.a (mon.0) 2684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: cluster 2026-03-09T17:36:52.401210+0000 mon.a (mon.0) 2685 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: cluster 2026-03-09T17:36:52.401210+0000 mon.a (mon.0) 2685 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:52.401257+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:52.401257+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:52.403007+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:52 vm00 bash[20770]: audit 2026-03-09T17:36:52.403007+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.409995+0000 mon.b (mon.1) 466 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.409995+0000 mon.b (mon.1) 466 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: cluster 2026-03-09T17:36:51.417178+0000 mon.a (mon.0) 2682 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: cluster 2026-03-09T17:36:51.417178+0000 mon.a (mon.0) 2682 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.419257+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.419257+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.431555+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.431555+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.935248+0000 mgr.y (mgr.14505) 426 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:51.935248+0000 mgr.y (mgr.14505) 426 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:52.398026+0000 mon.a (mon.0) 2684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:52.398026+0000 mon.a (mon.0) 2684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: cluster 2026-03-09T17:36:52.401210+0000 mon.a (mon.0) 2685 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: cluster 2026-03-09T17:36:52.401210+0000 mon.a (mon.0) 2685 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:52.401257+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:52.401257+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:52.403007+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:52 vm02 bash[23351]: audit 2026-03-09T17:36:52.403007+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:53.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: cluster 2026-03-09T17:36:52.792349+0000 mgr.y (mgr.14505) 427 : cluster [DBG] pgmap v684: 292 pgs: 292 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: cluster 2026-03-09T17:36:52.792349+0000 mgr.y (mgr.14505) 427 : cluster [DBG] pgmap v684: 292 pgs: 292 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: audit 2026-03-09T17:36:53.401276+0000 mon.a (mon.0) 2687 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: audit 2026-03-09T17:36:53.401276+0000 mon.a (mon.0) 2687 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: audit 2026-03-09T17:36:53.405210+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: audit 2026-03-09T17:36:53.405210+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: cluster 2026-03-09T17:36:53.413792+0000 mon.a (mon.0) 2688 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: cluster 2026-03-09T17:36:53.413792+0000 mon.a (mon.0) 2688 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: audit 2026-03-09T17:36:53.414014+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:53 vm00 bash[28333]: audit 2026-03-09T17:36:53.414014+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: cluster 2026-03-09T17:36:52.792349+0000 mgr.y (mgr.14505) 427 : cluster [DBG] pgmap v684: 292 pgs: 292 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: cluster 2026-03-09T17:36:52.792349+0000 mgr.y (mgr.14505) 427 : cluster [DBG] pgmap v684: 292 pgs: 292 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: audit 2026-03-09T17:36:53.401276+0000 mon.a (mon.0) 2687 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: audit 2026-03-09T17:36:53.401276+0000 mon.a (mon.0) 2687 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: audit 2026-03-09T17:36:53.405210+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: audit 2026-03-09T17:36:53.405210+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: cluster 2026-03-09T17:36:53.413792+0000 mon.a (mon.0) 2688 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: cluster 2026-03-09T17:36:53.413792+0000 mon.a (mon.0) 2688 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: audit 2026-03-09T17:36:53.414014+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:53 vm00 bash[20770]: audit 2026-03-09T17:36:53.414014+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: cluster 2026-03-09T17:36:52.792349+0000 mgr.y (mgr.14505) 427 : cluster [DBG] pgmap v684: 292 pgs: 292 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: cluster 2026-03-09T17:36:52.792349+0000 mgr.y (mgr.14505) 427 : cluster [DBG] pgmap v684: 292 pgs: 292 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 5 op/s 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: audit 2026-03-09T17:36:53.401276+0000 mon.a (mon.0) 2687 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: audit 2026-03-09T17:36:53.401276+0000 mon.a (mon.0) 2687 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_tier","val": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: audit 2026-03-09T17:36:53.405210+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: audit 2026-03-09T17:36:53.405210+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: cluster 2026-03-09T17:36:53.413792+0000 mon.a (mon.0) 2688 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: cluster 2026-03-09T17:36:53.413792+0000 mon.a (mon.0) 2688 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: audit 2026-03-09T17:36:53.414014+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:53 vm02 bash[23351]: audit 2026-03-09T17:36:53.414014+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: audit 2026-03-09T17:36:54.404722+0000 mon.a (mon.0) 2690 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: audit 2026-03-09T17:36:54.404722+0000 mon.a (mon.0) 2690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: cluster 2026-03-09T17:36:54.408161+0000 mon.a (mon.0) 2691 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: cluster 2026-03-09T17:36:54.408161+0000 mon.a (mon.0) 2691 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: audit 2026-03-09T17:36:54.409523+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: audit 2026-03-09T17:36:54.409523+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: audit 2026-03-09T17:36:54.412073+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: audit 2026-03-09T17:36:54.412073+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: cluster 2026-03-09T17:36:54.792642+0000 mgr.y (mgr.14505) 428 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:55 vm00 bash[28333]: cluster 2026-03-09T17:36:54.792642+0000 mgr.y (mgr.14505) 428 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: audit 2026-03-09T17:36:54.404722+0000 mon.a (mon.0) 2690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: audit 2026-03-09T17:36:54.404722+0000 mon.a (mon.0) 2690 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: cluster 2026-03-09T17:36:54.408161+0000 mon.a (mon.0) 2691 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: cluster 2026-03-09T17:36:54.408161+0000 mon.a (mon.0) 2691 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: audit 2026-03-09T17:36:54.409523+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: audit 2026-03-09T17:36:54.409523+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: audit 2026-03-09T17:36:54.412073+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: audit 2026-03-09T17:36:54.412073+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: cluster 2026-03-09T17:36:54.792642+0000 mgr.y (mgr.14505) 428 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s 2026-03-09T17:36:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:55 vm00 bash[20770]: cluster 2026-03-09T17:36:54.792642+0000 mgr.y (mgr.14505) 428 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: audit 2026-03-09T17:36:54.404722+0000 mon.a (mon.0) 2690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: audit 2026-03-09T17:36:54.404722+0000 mon.a (mon.0) 2690 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: cluster 2026-03-09T17:36:54.408161+0000 mon.a (mon.0) 2691 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: cluster 2026-03-09T17:36:54.408161+0000 mon.a (mon.0) 2691 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: audit 2026-03-09T17:36:54.409523+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: audit 2026-03-09T17:36:54.409523+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: audit 2026-03-09T17:36:54.412073+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: audit 2026-03-09T17:36:54.412073+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: cluster 2026-03-09T17:36:54.792642+0000 mgr.y (mgr.14505) 428 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s 2026-03-09T17:36:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:55 vm02 bash[23351]: cluster 2026-03-09T17:36:54.792642+0000 mgr.y (mgr.14505) 428 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:56 vm00 bash[28333]: audit 2026-03-09T17:36:55.412173+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:56 vm00 bash[28333]: audit 2026-03-09T17:36:55.412173+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:56 vm00 bash[28333]: cluster 2026-03-09T17:36:55.426050+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:56 vm00 bash[28333]: cluster 2026-03-09T17:36:55.426050+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:56 vm00 bash[20770]: audit 2026-03-09T17:36:55.412173+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:56 vm00 bash[20770]: audit 2026-03-09T17:36:55.412173+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:56 vm00 bash[20770]: cluster 2026-03-09T17:36:55.426050+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:56 vm00 bash[20770]: cluster 2026-03-09T17:36:55.426050+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T17:36:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:36:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:36:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:36:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:56 vm02 bash[23351]: audit 2026-03-09T17:36:55.412173+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:56 vm02 bash[23351]: audit 2026-03-09T17:36:55.412173+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:36:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:56 vm02 bash[23351]: cluster 2026-03-09T17:36:55.426050+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T17:36:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:56 vm02 bash[23351]: cluster 2026-03-09T17:36:55.426050+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:57 vm00 bash[28333]: cluster 2026-03-09T17:36:56.431040+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:57 vm00 bash[28333]: cluster 2026-03-09T17:36:56.431040+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:57 vm00 bash[28333]: cluster 2026-03-09T17:36:56.752839+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:57 vm00 bash[28333]: cluster 2026-03-09T17:36:56.752839+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:57 vm00 bash[28333]: cluster 2026-03-09T17:36:56.792976+0000 mgr.y (mgr.14505) 429 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:57 vm00 bash[28333]: cluster 2026-03-09T17:36:56.792976+0000 mgr.y (mgr.14505) 429 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:57 vm00 bash[20770]: cluster 2026-03-09T17:36:56.431040+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:57 vm00 bash[20770]: cluster 2026-03-09T17:36:56.431040+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:57 vm00 bash[20770]: cluster 2026-03-09T17:36:56.752839+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:57 vm00 bash[20770]: cluster 2026-03-09T17:36:56.752839+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:57 vm00 bash[20770]: cluster 2026-03-09T17:36:56.792976+0000 mgr.y (mgr.14505) 429 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:36:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:57 vm00 bash[20770]: cluster 2026-03-09T17:36:56.792976+0000 mgr.y (mgr.14505) 429 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 
944 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:36:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:57 vm02 bash[23351]: cluster 2026-03-09T17:36:56.431040+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T17:36:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:57 vm02 bash[23351]: cluster 2026-03-09T17:36:56.431040+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T17:36:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:57 vm02 bash[23351]: cluster 2026-03-09T17:36:56.752839+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:57 vm02 bash[23351]: cluster 2026-03-09T17:36:56.752839+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:36:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:57 vm02 bash[23351]: cluster 2026-03-09T17:36:56.792976+0000 mgr.y (mgr.14505) 429 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:36:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:57 vm02 bash[23351]: cluster 2026-03-09T17:36:56.792976+0000 mgr.y (mgr.14505) 429 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: cluster 2026-03-09T17:36:57.437192+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: cluster 2026-03-09T17:36:57.437192+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.481072+0000 mon.b (mon.1) 471 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.481072+0000 mon.b (mon.1) 471 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.481968+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.481968+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.482226+0000 mon.a (mon.0) 2698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.482226+0000 mon.a (mon.0) 2698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.482929+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.482929+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.840908+0000 mon.c (mon.2) 610 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:58 vm00 bash[28333]: audit 2026-03-09T17:36:57.840908+0000 mon.c (mon.2) 610 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: cluster 2026-03-09T17:36:57.437192+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: cluster 2026-03-09T17:36:57.437192+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.481072+0000 mon.b (mon.1) 471 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.481072+0000 mon.b (mon.1) 471 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.481968+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.481968+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.482226+0000 mon.a (mon.0) 2698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.482226+0000 mon.a (mon.0) 2698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.482929+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.482929+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.840908+0000 mon.c (mon.2) 610 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:58 vm00 bash[20770]: audit 2026-03-09T17:36:57.840908+0000 mon.c (mon.2) 610 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: cluster 2026-03-09T17:36:57.437192+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: cluster 2026-03-09T17:36:57.437192+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.481072+0000 mon.b (mon.1) 471 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.481072+0000 mon.b (mon.1) 471 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.481968+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.481968+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.482226+0000 mon.a (mon.0) 2698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.482226+0000 mon.a (mon.0) 2698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.482929+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.482929+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-85"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.840908+0000 mon.c (mon.2) 610 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:58 vm02 bash[23351]: audit 2026-03-09T17:36:57.840908+0000 mon.c (mon.2) 610 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:59 vm00 bash[28333]: cluster 2026-03-09T17:36:58.455170+0000 mon.a (mon.0) 2700 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:59 vm00 bash[28333]: cluster 2026-03-09T17:36:58.455170+0000 mon.a (mon.0) 2700 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:59 vm00 bash[28333]: cluster 2026-03-09T17:36:58.793303+0000 mgr.y (mgr.14505) 430 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:36:59 vm00 bash[28333]: cluster 2026-03-09T17:36:58.793303+0000 mgr.y (mgr.14505) 430 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:59 vm00 bash[20770]: cluster 2026-03-09T17:36:58.455170+0000 mon.a (mon.0) 2700 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:59 vm00 bash[20770]: cluster 2026-03-09T17:36:58.455170+0000 mon.a (mon.0) 2700 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:59 vm00 bash[20770]: cluster 2026-03-09T17:36:58.793303+0000 mgr.y (mgr.14505) 430 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:36:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:36:59 vm00 bash[20770]: cluster 2026-03-09T17:36:58.793303+0000 mgr.y (mgr.14505) 430 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:36:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:59 vm02 bash[23351]: cluster 2026-03-09T17:36:58.455170+0000 mon.a (mon.0) 2700 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T17:36:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:59 vm02 bash[23351]: cluster 2026-03-09T17:36:58.455170+0000 mon.a (mon.0) 2700 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T17:36:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:36:59 vm02 bash[23351]: cluster 2026-03-09T17:36:58.793303+0000 mgr.y (mgr.14505) 430 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:36:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:36:59 vm02 bash[23351]: cluster 2026-03-09T17:36:58.793303+0000 mgr.y (mgr.14505) 430 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: OK ] LibRadosTwoPoolsPP.ProxyRead (18301 ms) 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.CachePin 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.CachePin (22271 ms) 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.SetRedirectRead 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.SetRedirectRead (3203 ms) 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestPromoteRead 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestPromoteRead (3020 ms) 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRefRead 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRefRead (3172 ms) 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestUnset 2026-03-09T17:37:00.493 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestUnset (3015 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestDedupRefRead 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestDedupRefRead (4047 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount (39439 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 (16280 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestTestSnapCreate 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestTestSnapCreate (3598 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 
2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote (3821 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification (24524 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapIncCount 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapIncCount (14216 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvict 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvict (5057 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictPromote 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictPromote (4055 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: waiting for scrubs... 
2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: done waiting 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch (24546 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.DedupFlushRead 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.DedupFlushRead (10221 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushSnap 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushSnap (9202 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushDupCount 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushDupCount (9050 ms) 2026-03-09T17:37:00.494 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringFlush 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:00 vm00 bash[28333]: cluster 2026-03-09T17:36:59.466038+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:00 vm00 bash[28333]: cluster 2026-03-09T17:36:59.466038+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:00 vm00 bash[28333]: audit 2026-03-09T17:36:59.481726+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:00 vm00 bash[28333]: audit 2026-03-09T17:36:59.481726+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:00 vm00 bash[28333]: audit 2026-03-09T17:36:59.482818+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:00 vm00 bash[28333]: audit 2026-03-09T17:36:59.482818+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:00 vm00 bash[20770]: cluster 2026-03-09T17:36:59.466038+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:00 vm00 bash[20770]: cluster 2026-03-09T17:36:59.466038+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:00 vm00 bash[20770]: audit 2026-03-09T17:36:59.481726+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:00 vm00 bash[20770]: audit 2026-03-09T17:36:59.481726+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:00 vm00 bash[20770]: audit 2026-03-09T17:36:59.482818+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:00 vm00 bash[20770]: audit 2026-03-09T17:36:59.482818+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:00 vm02 bash[23351]: cluster 2026-03-09T17:36:59.466038+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T17:37:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:00 vm02 bash[23351]: cluster 2026-03-09T17:36:59.466038+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T17:37:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:00 vm02 bash[23351]: audit 2026-03-09T17:36:59.481726+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:00 vm02 bash[23351]: audit 2026-03-09T17:36:59.481726+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:00 vm02 bash[23351]: audit 2026-03-09T17:36:59.482818+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:00 vm02 bash[23351]: audit 2026-03-09T17:36:59.482818+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:01.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: audit 2026-03-09T17:37:00.460374+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: audit 2026-03-09T17:37:00.460374+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: cluster 2026-03-09T17:37:00.464536+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: cluster 2026-03-09T17:37:00.464536+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: audit 2026-03-09T17:37:00.483016+0000 mon.b (mon.1) 474 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: audit 2026-03-09T17:37:00.483016+0000 mon.b (mon.1) 474 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: cluster 2026-03-09T17:37:00.793634+0000 mgr.y (mgr.14505) 431 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:01 vm00 bash[28333]: cluster 2026-03-09T17:37:00.793634+0000 mgr.y (mgr.14505) 431 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: audit 2026-03-09T17:37:00.460374+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: audit 2026-03-09T17:37:00.460374+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: cluster 2026-03-09T17:37:00.464536+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: cluster 2026-03-09T17:37:00.464536+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: audit 2026-03-09T17:37:00.483016+0000 mon.b (mon.1) 474 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: audit 2026-03-09T17:37:00.483016+0000 mon.b (mon.1) 474 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: cluster 2026-03-09T17:37:00.793634+0000 mgr.y (mgr.14505) 431 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:01 vm00 bash[20770]: cluster 2026-03-09T17:37:00.793634+0000 mgr.y (mgr.14505) 431 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: audit 2026-03-09T17:37:00.460374+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: audit 2026-03-09T17:37:00.460374+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: cluster 2026-03-09T17:37:00.464536+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: cluster 2026-03-09T17:37:00.464536+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: audit 2026-03-09T17:37:00.483016+0000 mon.b (mon.1) 474 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: audit 2026-03-09T17:37:00.483016+0000 mon.b (mon.1) 474 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: cluster 2026-03-09T17:37:00.793634+0000 mgr.y (mgr.14505) 431 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:01.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:01 vm02 bash[23351]: cluster 2026-03-09T17:37:00.793634+0000 mgr.y (mgr.14505) 431 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T17:37:02.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:37:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: cluster 2026-03-09T17:37:01.498169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: cluster 2026-03-09T17:37:01.498169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: audit 2026-03-09T17:37:01.511307+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: audit 2026-03-09T17:37:01.511307+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: audit 2026-03-09T17:37:01.515783+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: audit 2026-03-09T17:37:01.515783+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: audit 2026-03-09T17:37:01.941603+0000 mgr.y (mgr.14505) 432 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:02 vm00 bash[28333]: audit 2026-03-09T17:37:01.941603+0000 mgr.y (mgr.14505) 432 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: cluster 2026-03-09T17:37:01.498169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: cluster 2026-03-09T17:37:01.498169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: audit 2026-03-09T17:37:01.511307+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: audit 2026-03-09T17:37:01.511307+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: audit 2026-03-09T17:37:01.515783+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: audit 2026-03-09T17:37:01.515783+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: audit 2026-03-09T17:37:01.941603+0000 mgr.y (mgr.14505) 432 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:02 vm00 bash[20770]: audit 2026-03-09T17:37:01.941603+0000 mgr.y (mgr.14505) 432 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: cluster 2026-03-09T17:37:01.498169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: cluster 2026-03-09T17:37:01.498169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: audit 2026-03-09T17:37:01.511307+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: audit 2026-03-09T17:37:01.511307+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: audit 2026-03-09T17:37:01.515783+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: audit 2026-03-09T17:37:01.515783+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: audit 2026-03-09T17:37:01.941603+0000 mgr.y (mgr.14505) 432 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:02 vm02 bash[23351]: audit 2026-03-09T17:37:01.941603+0000 mgr.y (mgr.14505) 432 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: audit 2026-03-09T17:37:02.487966+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: audit 2026-03-09T17:37:02.487966+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: cluster 2026-03-09T17:37:02.491332+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: cluster 2026-03-09T17:37:02.491332+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: audit 2026-03-09T17:37:02.504249+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: audit 2026-03-09T17:37:02.504249+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: audit 2026-03-09T17:37:02.517232+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: audit 2026-03-09T17:37:02.517232+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: cluster 2026-03-09T17:37:02.794186+0000 mgr.y (mgr.14505) 433 : cluster [DBG] pgmap v699: 324 pgs: 10 unknown, 314 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:03 vm00 bash[28333]: cluster 2026-03-09T17:37:02.794186+0000 mgr.y (mgr.14505) 433 : cluster [DBG] pgmap v699: 324 pgs: 10 unknown, 314 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: audit 2026-03-09T17:37:02.487966+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: audit 2026-03-09T17:37:02.487966+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: cluster 2026-03-09T17:37:02.491332+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: cluster 2026-03-09T17:37:02.491332+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: audit 2026-03-09T17:37:02.504249+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: audit 2026-03-09T17:37:02.504249+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: audit 2026-03-09T17:37:02.517232+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: audit 2026-03-09T17:37:02.517232+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: cluster 2026-03-09T17:37:02.794186+0000 mgr.y (mgr.14505) 433 : cluster [DBG] pgmap v699: 324 pgs: 10 unknown, 314 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:03.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:03 vm00 bash[20770]: cluster 2026-03-09T17:37:02.794186+0000 mgr.y (mgr.14505) 433 : cluster [DBG] pgmap v699: 324 pgs: 10 unknown, 314 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: audit 2026-03-09T17:37:02.487966+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: audit 2026-03-09T17:37:02.487966+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: cluster 2026-03-09T17:37:02.491332+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: cluster 2026-03-09T17:37:02.491332+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: audit 2026-03-09T17:37:02.504249+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: audit 2026-03-09T17:37:02.504249+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: audit 2026-03-09T17:37:02.517232+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: audit 2026-03-09T17:37:02.517232+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]: dispatch 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: cluster 2026-03-09T17:37:02.794186+0000 mgr.y (mgr.14505) 433 : cluster [DBG] pgmap v699: 324 pgs: 10 unknown, 314 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:03.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:03 vm02 bash[23351]: cluster 2026-03-09T17:37:02.794186+0000 mgr.y (mgr.14505) 433 : cluster [DBG] pgmap v699: 324 pgs: 10 unknown, 314 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: cluster 2026-03-09T17:37:03.506623+0000 mon.a (mon.0) 2710 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: cluster 2026-03-09T17:37:03.506623+0000 mon.a (mon.0) 2710 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: audit 2026-03-09T17:37:03.513880+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]': finished 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: audit 2026-03-09T17:37:03.513880+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]': finished 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: audit 2026-03-09T17:37:03.527803+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: audit 2026-03-09T17:37:03.527803+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: cluster 2026-03-09T17:37:03.530101+0000 mon.a (mon.0) 2712 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: cluster 2026-03-09T17:37:03.530101+0000 mon.a (mon.0) 2712 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: audit 2026-03-09T17:37:03.531182+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:04 vm02 bash[23351]: audit 2026-03-09T17:37:03.531182+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: cluster 2026-03-09T17:37:03.506623+0000 mon.a (mon.0) 2710 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: cluster 2026-03-09T17:37:03.506623+0000 mon.a (mon.0) 2710 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: audit 2026-03-09T17:37:03.513880+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]': finished 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: audit 2026-03-09T17:37:03.513880+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]': finished 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: audit 2026-03-09T17:37:03.527803+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: audit 2026-03-09T17:37:03.527803+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: cluster 2026-03-09T17:37:03.530101+0000 mon.a (mon.0) 2712 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: cluster 2026-03-09T17:37:03.530101+0000 mon.a (mon.0) 2712 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: audit 2026-03-09T17:37:03.531182+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:04 vm00 bash[28333]: audit 2026-03-09T17:37:03.531182+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: cluster 2026-03-09T17:37:03.506623+0000 mon.a (mon.0) 2710 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: cluster 2026-03-09T17:37:03.506623+0000 mon.a (mon.0) 2710 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: audit 2026-03-09T17:37:03.513880+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]': finished 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: audit 2026-03-09T17:37:03.513880+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_tier","val": "test-rados-api-vm00-60118-89-test-flush"}]': finished 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: audit 2026-03-09T17:37:03.527803+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: audit 2026-03-09T17:37:03.527803+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: cluster 2026-03-09T17:37:03.530101+0000 mon.a (mon.0) 2712 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: cluster 2026-03-09T17:37:03.530101+0000 mon.a (mon.0) 2712 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: audit 2026-03-09T17:37:03.531182+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:04 vm00 bash[20770]: audit 2026-03-09T17:37:03.531182+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: audit 2026-03-09T17:37:04.607384+0000 mon.a (mon.0) 2714 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: audit 2026-03-09T17:37:04.607384+0000 mon.a (mon.0) 2714 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: audit 2026-03-09T17:37:04.610646+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: audit 2026-03-09T17:37:04.610646+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: cluster 2026-03-09T17:37:04.613074+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: cluster 2026-03-09T17:37:04.613074+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: audit 2026-03-09T17:37:04.616421+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: audit 2026-03-09T17:37:04.616421+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: cluster 2026-03-09T17:37:04.794565+0000 mgr.y (mgr.14505) 434 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:05 vm00 bash[28333]: cluster 2026-03-09T17:37:04.794565+0000 mgr.y (mgr.14505) 434 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: audit 2026-03-09T17:37:04.607384+0000 mon.a (mon.0) 2714 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: audit 2026-03-09T17:37:04.607384+0000 mon.a (mon.0) 2714 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: audit 2026-03-09T17:37:04.610646+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: audit 2026-03-09T17:37:04.610646+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: cluster 2026-03-09T17:37:04.613074+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: cluster 2026-03-09T17:37:04.613074+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: audit 2026-03-09T17:37:04.616421+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: audit 2026-03-09T17:37:04.616421+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: cluster 2026-03-09T17:37:04.794565+0000 mgr.y (mgr.14505) 434 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:05 vm00 bash[20770]: cluster 2026-03-09T17:37:04.794565+0000 mgr.y (mgr.14505) 434 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: audit 2026-03-09T17:37:04.607384+0000 mon.a (mon.0) 2714 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: audit 2026-03-09T17:37:04.607384+0000 mon.a (mon.0) 2714 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: audit 2026-03-09T17:37:04.610646+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: audit 2026-03-09T17:37:04.610646+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: cluster 2026-03-09T17:37:04.613074+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: cluster 2026-03-09T17:37:04.613074+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: audit 2026-03-09T17:37:04.616421+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: audit 2026-03-09T17:37:04.616421+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: cluster 2026-03-09T17:37:04.794565+0000 mgr.y (mgr.14505) 434 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:05 vm02 bash[23351]: cluster 2026-03-09T17:37:04.794565+0000 mgr.y (mgr.14505) 434 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:37:06.731 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:37:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:37:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:06 vm00 bash[28333]: audit 2026-03-09T17:37:05.713209+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:06 vm00 bash[28333]: audit 2026-03-09T17:37:05.713209+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:06 vm00 bash[28333]: cluster 2026-03-09T17:37:05.716376+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:06 vm00 bash[28333]: cluster 2026-03-09T17:37:05.716376+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:06 vm00 bash[20770]: audit 2026-03-09T17:37:05.713209+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:06 vm00 bash[20770]: audit 2026-03-09T17:37:05.713209+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:06 vm00 bash[20770]: cluster 2026-03-09T17:37:05.716376+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T17:37:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:06 vm00 bash[20770]: cluster 2026-03-09T17:37:05.716376+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T17:37:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:06 vm02 bash[23351]: audit 2026-03-09T17:37:05.713209+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:37:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:06 vm02 bash[23351]: audit 2026-03-09T17:37:05.713209+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:37:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:06 vm02 bash[23351]: cluster 2026-03-09T17:37:05.716376+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T17:37:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:06 vm02 bash[23351]: cluster 2026-03-09T17:37:05.716376+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: cluster 2026-03-09T17:37:06.741261+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: cluster 2026-03-09T17:37:06.741261+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: cluster 2026-03-09T17:37:06.794916+0000 mgr.y (mgr.14505) 435 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: cluster 2026-03-09T17:37:06.794916+0000 mgr.y (mgr.14505) 435 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.803671+0000 mon.b (mon.1) 479 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.803671+0000 mon.b (mon.1) 479 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.804774+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.804774+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.804841+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.804841+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.805801+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:07 vm00 bash[28333]: audit 2026-03-09T17:37:06.805801+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: cluster 2026-03-09T17:37:06.741261+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: cluster 2026-03-09T17:37:06.741261+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: cluster 2026-03-09T17:37:06.794916+0000 mgr.y (mgr.14505) 435 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: cluster 2026-03-09T17:37:06.794916+0000 mgr.y (mgr.14505) 435 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.803671+0000 mon.b (mon.1) 479 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.803671+0000 mon.b (mon.1) 479 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.804774+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.804774+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.804841+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.804841+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.805801+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:07 vm00 bash[20770]: audit 2026-03-09T17:37:06.805801+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: cluster 2026-03-09T17:37:06.741261+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: cluster 2026-03-09T17:37:06.741261+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: cluster 2026-03-09T17:37:06.794916+0000 mgr.y (mgr.14505) 435 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: cluster 2026-03-09T17:37:06.794916+0000 mgr.y (mgr.14505) 435 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.803671+0000 mon.b (mon.1) 479 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.803671+0000 mon.b (mon.1) 479 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.804774+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.804774+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.804841+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.804841+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.805801+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:07 vm02 bash[23351]: audit 2026-03-09T17:37:06.805801+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-87"}]: dispatch 2026-03-09T17:37:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:08 vm00 bash[28333]: cluster 2026-03-09T17:37:07.765460+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T17:37:09.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:08 vm00 bash[28333]: cluster 2026-03-09T17:37:07.765460+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T17:37:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:08 vm00 bash[20770]: cluster 2026-03-09T17:37:07.765460+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T17:37:09.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:08 vm00 bash[20770]: cluster 2026-03-09T17:37:07.765460+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T17:37:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:08 vm02 bash[23351]: cluster 2026-03-09T17:37:07.765460+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T17:37:09.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:08 vm02 bash[23351]: cluster 2026-03-09T17:37:07.765460+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: cluster 2026-03-09T17:37:08.768863+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: cluster 2026-03-09T17:37:08.768863+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: audit 2026-03-09T17:37:08.770977+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: audit 2026-03-09T17:37:08.770977+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: audit 2026-03-09T17:37:08.775563+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: audit 2026-03-09T17:37:08.775563+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: cluster 2026-03-09T17:37:08.795291+0000 mgr.y (mgr.14505) 436 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:09 vm00 bash[28333]: cluster 2026-03-09T17:37:08.795291+0000 mgr.y (mgr.14505) 436 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: cluster 2026-03-09T17:37:08.768863+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: cluster 2026-03-09T17:37:08.768863+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: audit 2026-03-09T17:37:08.770977+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: audit 2026-03-09T17:37:08.770977+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: audit 2026-03-09T17:37:08.775563+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: audit 2026-03-09T17:37:08.775563+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: cluster 2026-03-09T17:37:08.795291+0000 mgr.y (mgr.14505) 436 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:10.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:09 vm00 bash[20770]: cluster 2026-03-09T17:37:08.795291+0000 mgr.y (mgr.14505) 436 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: cluster 2026-03-09T17:37:08.768863+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: cluster 2026-03-09T17:37:08.768863+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: audit 2026-03-09T17:37:08.770977+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: audit 2026-03-09T17:37:08.770977+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: audit 2026-03-09T17:37:08.775563+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: audit 2026-03-09T17:37:08.775563+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: cluster 2026-03-09T17:37:08.795291+0000 mgr.y (mgr.14505) 436 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:10.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:09 vm02 bash[23351]: cluster 2026-03-09T17:37:08.795291+0000 mgr.y (mgr.14505) 436 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: cluster 2026-03-09T17:37:09.757719+0000 mon.a (mon.0) 2725 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: cluster 2026-03-09T17:37:09.757719+0000 mon.a (mon.0) 2725 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.768734+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.768734+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.783541+0000 mon.b (mon.1) 482 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.783541+0000 mon.b (mon.1) 482 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: cluster 2026-03-09T17:37:09.784739+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: cluster 2026-03-09T17:37:09.784739+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.792649+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.792649+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.800011+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:10 vm00 bash[28333]: audit 2026-03-09T17:37:09.800011+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: cluster 2026-03-09T17:37:09.757719+0000 mon.a (mon.0) 2725 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: cluster 2026-03-09T17:37:09.757719+0000 mon.a (mon.0) 2725 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.768734+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.768734+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.783541+0000 mon.b (mon.1) 482 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.783541+0000 mon.b (mon.1) 482 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: cluster 2026-03-09T17:37:09.784739+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: cluster 2026-03-09T17:37:09.784739+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.792649+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.792649+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.800011+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:10 vm00 bash[20770]: audit 2026-03-09T17:37:09.800011+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: cluster 2026-03-09T17:37:09.757719+0000 mon.a (mon.0) 2725 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: cluster 2026-03-09T17:37:09.757719+0000 mon.a (mon.0) 2725 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.768734+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.768734+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.783541+0000 mon.b (mon.1) 482 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.783541+0000 mon.b (mon.1) 482 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: cluster 2026-03-09T17:37:09.784739+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: cluster 2026-03-09T17:37:09.784739+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.792649+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.792649+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.800011+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:11.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:10 vm02 bash[23351]: audit 2026-03-09T17:37:09.800011+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:37:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:11 vm02 bash[23351]: audit 2026-03-09T17:37:10.771300+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:11 vm02 bash[23351]: audit 2026-03-09T17:37:10.771300+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:11 vm02 bash[23351]: cluster 2026-03-09T17:37:10.774381+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T17:37:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:11 vm02 bash[23351]: cluster 2026-03-09T17:37:10.774381+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T17:37:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:11 vm02 bash[23351]: cluster 2026-03-09T17:37:10.795586+0000 mgr.y (mgr.14505) 437 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:12.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:11 vm02 bash[23351]: cluster 2026-03-09T17:37:10.795586+0000 mgr.y (mgr.14505) 437 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:12.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:37:11 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:37:12.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:11 vm00 bash[28333]: audit 2026-03-09T17:37:10.771300+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:11 vm00 bash[28333]: audit 2026-03-09T17:37:10.771300+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:11 vm00 bash[28333]: cluster 2026-03-09T17:37:10.774381+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:11 vm00 bash[28333]: cluster 2026-03-09T17:37:10.774381+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:11 vm00 bash[28333]: cluster 2026-03-09T17:37:10.795586+0000 mgr.y (mgr.14505) 437 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:11 vm00 bash[28333]: cluster 2026-03-09T17:37:10.795586+0000 mgr.y (mgr.14505) 437 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:11 vm00 bash[20770]: audit 2026-03-09T17:37:10.771300+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:11 vm00 bash[20770]: audit 2026-03-09T17:37:10.771300+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:11 vm00 bash[20770]: cluster 2026-03-09T17:37:10.774381+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:11 vm00 bash[20770]: cluster 2026-03-09T17:37:10.774381+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:11 vm00 bash[20770]: cluster 2026-03-09T17:37:10.795586+0000 mgr.y (mgr.14505) 437 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:11 vm00 bash[20770]: cluster 2026-03-09T17:37:10.795586+0000 mgr.y (mgr.14505) 437 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:37:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:12 vm02 bash[23351]: cluster 2026-03-09T17:37:11.805237+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T17:37:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:12 vm02 bash[23351]: cluster 2026-03-09T17:37:11.805237+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T17:37:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:12 vm02 bash[23351]: audit 2026-03-09T17:37:11.951944+0000 mgr.y (mgr.14505) 438 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:13.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:12 vm02 bash[23351]: audit 2026-03-09T17:37:11.951944+0000 mgr.y (mgr.14505) 438 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:13.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:12 vm00 bash[28333]: cluster 2026-03-09T17:37:11.805237+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T17:37:13.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:12 vm00 bash[28333]: cluster 2026-03-09T17:37:11.805237+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T17:37:13.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:12 vm00 bash[28333]: audit 2026-03-09T17:37:11.951944+0000 mgr.y (mgr.14505) 438 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:12 vm00 bash[28333]: audit 2026-03-09T17:37:11.951944+0000 mgr.y (mgr.14505) 438 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:12 vm00 bash[20770]: cluster 2026-03-09T17:37:11.805237+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T17:37:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:12 vm00 bash[20770]: cluster 2026-03-09T17:37:11.805237+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e467: 8 total, 8 up, 
8 in 2026-03-09T17:37:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:12 vm00 bash[20770]: audit 2026-03-09T17:37:11.951944+0000 mgr.y (mgr.14505) 438 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:12 vm00 bash[20770]: audit 2026-03-09T17:37:11.951944+0000 mgr.y (mgr.14505) 438 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: cluster 2026-03-09T17:37:12.796405+0000 mgr.y (mgr.14505) 439 : cluster [DBG] pgmap v713: 292 pgs: 5 unknown, 287 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: cluster 2026-03-09T17:37:12.796405+0000 mgr.y (mgr.14505) 439 : cluster [DBG] pgmap v713: 292 pgs: 5 unknown, 287 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: cluster 2026-03-09T17:37:12.812360+0000 mon.a (mon.0) 2732 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: cluster 2026-03-09T17:37:12.812360+0000 mon.a (mon.0) 2732 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.840764+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.840764+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.841527+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.841527+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.841788+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.841788+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.842483+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.842483+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.847090+0000 mon.c (mon.2) 611 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:14.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:13 vm02 bash[23351]: audit 2026-03-09T17:37:12.847090+0000 mon.c (mon.2) 611 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: cluster 2026-03-09T17:37:12.796405+0000 mgr.y (mgr.14505) 439 : cluster [DBG] pgmap v713: 292 pgs: 5 unknown, 287 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: cluster 2026-03-09T17:37:12.796405+0000 mgr.y (mgr.14505) 439 : cluster [DBG] pgmap v713: 292 pgs: 5 unknown, 287 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: cluster 2026-03-09T17:37:12.812360+0000 mon.a (mon.0) 2732 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: cluster 2026-03-09T17:37:12.812360+0000 mon.a (mon.0) 2732 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.840764+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.840764+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.841527+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.841527+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.841788+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.841788+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.842483+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.842483+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.847090+0000 mon.c (mon.2) 611 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:13 vm00 bash[28333]: audit 2026-03-09T17:37:12.847090+0000 mon.c (mon.2) 611 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: cluster 2026-03-09T17:37:12.796405+0000 mgr.y (mgr.14505) 439 : cluster [DBG] pgmap v713: 292 pgs: 5 unknown, 287 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: cluster 2026-03-09T17:37:12.796405+0000 mgr.y (mgr.14505) 439 : cluster [DBG] pgmap v713: 292 pgs: 5 unknown, 287 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: cluster 2026-03-09T17:37:12.812360+0000 mon.a (mon.0) 2732 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: cluster 2026-03-09T17:37:12.812360+0000 mon.a (mon.0) 2732 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 
2026-03-09T17:37:12.840764+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.840764+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.841527+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.841527+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.841788+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.841788+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.842483+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.842483+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-90"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.847090+0000 mon.c (mon.2) 611 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:13 vm00 bash[20770]: audit 2026-03-09T17:37:12.847090+0000 mon.c (mon.2) 611 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:14 vm02 bash[23351]: cluster 2026-03-09T17:37:13.818484+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T17:37:15.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:14 vm02 bash[23351]: cluster 2026-03-09T17:37:13.818484+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T17:37:15.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:14 vm00 bash[28333]: cluster 2026-03-09T17:37:13.818484+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T17:37:15.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:14 vm00 bash[28333]: cluster 2026-03-09T17:37:13.818484+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T17:37:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:14 vm00 bash[20770]: cluster 2026-03-09T17:37:13.818484+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T17:37:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:14 vm00 bash[20770]: cluster 2026-03-09T17:37:13.818484+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: cluster 2026-03-09T17:37:14.796675+0000 mgr.y (mgr.14505) 440 : cluster [DBG] pgmap v716: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 B/s wr, 2 op/s 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: cluster 2026-03-09T17:37:14.796675+0000 mgr.y (mgr.14505) 440 : cluster [DBG] pgmap v716: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 B/s wr, 2 op/s 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: cluster 2026-03-09T17:37:14.829125+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: cluster 2026-03-09T17:37:14.829125+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: audit 2026-03-09T17:37:14.840736+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: audit 2026-03-09T17:37:14.840736+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: audit 2026-03-09T17:37:14.842606+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:15 vm02 bash[23351]: audit 2026-03-09T17:37:14.842606+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: cluster 2026-03-09T17:37:14.796675+0000 mgr.y (mgr.14505) 440 : cluster [DBG] pgmap v716: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 B/s wr, 2 op/s 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: cluster 2026-03-09T17:37:14.796675+0000 mgr.y (mgr.14505) 440 : cluster [DBG] pgmap v716: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 B/s wr, 2 op/s 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: cluster 2026-03-09T17:37:14.829125+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: cluster 2026-03-09T17:37:14.829125+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: audit 2026-03-09T17:37:14.840736+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: audit 2026-03-09T17:37:14.840736+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: audit 2026-03-09T17:37:14.842606+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:15 vm00 bash[28333]: audit 2026-03-09T17:37:14.842606+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: cluster 2026-03-09T17:37:14.796675+0000 mgr.y (mgr.14505) 440 : cluster [DBG] pgmap v716: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 B/s wr, 2 op/s 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: cluster 2026-03-09T17:37:14.796675+0000 mgr.y (mgr.14505) 440 : cluster [DBG] pgmap v716: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1018 B/s wr, 2 op/s 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: cluster 2026-03-09T17:37:14.829125+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: cluster 2026-03-09T17:37:14.829125+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: audit 2026-03-09T17:37:14.840736+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: audit 2026-03-09T17:37:14.840736+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: audit 2026-03-09T17:37:14.842606+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:15 vm00 bash[20770]: audit 2026-03-09T17:37:14.842606+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:16.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:37:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:37:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: audit 2026-03-09T17:37:15.827418+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: audit 2026-03-09T17:37:15.827418+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: audit 2026-03-09T17:37:15.832132+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: audit 2026-03-09T17:37:15.832132+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: cluster 2026-03-09T17:37:15.833040+0000 mon.a (mon.0) 2739 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: cluster 2026-03-09T17:37:15.833040+0000 mon.a (mon.0) 2739 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: cluster 2026-03-09T17:37:16.756034+0000 mon.a (mon.0) 2740 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: cluster 2026-03-09T17:37:16.756034+0000 mon.a (mon.0) 2740 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: cluster 2026-03-09T17:37:16.833598+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T17:37:17.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:16 vm02 bash[23351]: cluster 2026-03-09T17:37:16.833598+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T17:37:17.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: audit 2026-03-09T17:37:15.827418+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: audit 2026-03-09T17:37:15.827418+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: audit 2026-03-09T17:37:15.832132+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: audit 2026-03-09T17:37:15.832132+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: cluster 2026-03-09T17:37:15.833040+0000 mon.a (mon.0) 2739 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: cluster 2026-03-09T17:37:15.833040+0000 mon.a (mon.0) 2739 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: cluster 2026-03-09T17:37:16.756034+0000 mon.a (mon.0) 2740 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: cluster 2026-03-09T17:37:16.756034+0000 mon.a (mon.0) 2740 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: cluster 2026-03-09T17:37:16.833598+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:16 vm00 bash[28333]: cluster 2026-03-09T17:37:16.833598+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: audit 2026-03-09T17:37:15.827418+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: audit 2026-03-09T17:37:15.827418+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: audit 2026-03-09T17:37:15.832132+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: audit 2026-03-09T17:37:15.832132+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: cluster 2026-03-09T17:37:15.833040+0000 mon.a (mon.0) 2739 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: cluster 2026-03-09T17:37:15.833040+0000 mon.a (mon.0) 2739 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: cluster 2026-03-09T17:37:16.756034+0000 mon.a (mon.0) 2740 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: cluster 2026-03-09T17:37:16.756034+0000 mon.a (mon.0) 2740 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: cluster 2026-03-09T17:37:16.833598+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T17:37:17.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:16 vm00 bash[20770]: cluster 2026-03-09T17:37:16.833598+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T17:37:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:17 vm02 bash[23351]: cluster 2026-03-09T17:37:16.796927+0000 mgr.y (mgr.14505) 441 : cluster [DBG] pgmap v719: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:17 vm02 bash[23351]: cluster 2026-03-09T17:37:16.796927+0000 mgr.y (mgr.14505) 441 : cluster [DBG] pgmap v719: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:17 vm02 bash[23351]: cluster 2026-03-09T17:37:17.838381+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T17:37:18.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:17 vm02 bash[23351]: cluster 2026-03-09T17:37:17.838381+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T17:37:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:17 vm00 bash[28333]: cluster 2026-03-09T17:37:16.796927+0000 mgr.y (mgr.14505) 441 : cluster [DBG] pgmap v719: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:18.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:17 vm00 bash[28333]: cluster 2026-03-09T17:37:16.796927+0000 mgr.y (mgr.14505) 441 : cluster [DBG] pgmap v719: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:17 vm00 bash[28333]: cluster 2026-03-09T17:37:17.838381+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T17:37:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:17 vm00 bash[28333]: cluster 2026-03-09T17:37:17.838381+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T17:37:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:17 vm00 bash[20770]: cluster 2026-03-09T17:37:16.796927+0000 mgr.y (mgr.14505) 441 : cluster 
[DBG] pgmap v719: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:17 vm00 bash[20770]: cluster 2026-03-09T17:37:16.796927+0000 mgr.y (mgr.14505) 441 : cluster [DBG] pgmap v719: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:17 vm00 bash[20770]: cluster 2026-03-09T17:37:17.838381+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T17:37:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:17 vm00 bash[20770]: cluster 2026-03-09T17:37:17.838381+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.878934+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.878934+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.879835+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.879835+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.880008+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.880008+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.880806+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:18 vm02 bash[23351]: audit 2026-03-09T17:37:17.880806+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.287 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.878934+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.878934+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.879835+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.879835+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.880008+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.880008+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.880806+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:18 vm00 bash[28333]: audit 2026-03-09T17:37:17.880806+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.878934+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.878934+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.879835+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.879835+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.880008+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.880008+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.880806+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:18 vm00 bash[20770]: audit 2026-03-09T17:37:17.880806+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-92"}]: dispatch 2026-03-09T17:37:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:19 vm02 bash[23351]: cluster 2026-03-09T17:37:18.797361+0000 mgr.y (mgr.14505) 442 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:37:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:19 vm02 bash[23351]: cluster 2026-03-09T17:37:18.797361+0000 mgr.y (mgr.14505) 442 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:37:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:19 vm02 bash[23351]: cluster 2026-03-09T17:37:18.890553+0000 mon.a (mon.0) 2745 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T17:37:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:19 vm02 bash[23351]: cluster 2026-03-09T17:37:18.890553+0000 mon.a (mon.0) 2745 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T17:37:20.136 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:37:20 vm02 bash[51223]: logger=cleanup t=2026-03-09T17:37:20.032835889Z level=info msg="Completed cleanup jobs" duration=1.708035ms 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:19 vm00 bash[28333]: cluster 2026-03-09T17:37:18.797361+0000 mgr.y (mgr.14505) 442 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:19 vm00 bash[28333]: cluster 2026-03-09T17:37:18.797361+0000 mgr.y (mgr.14505) 442 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:19 vm00 bash[28333]: cluster 2026-03-09T17:37:18.890553+0000 mon.a (mon.0) 2745 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:19 vm00 bash[28333]: cluster 2026-03-09T17:37:18.890553+0000 mon.a (mon.0) 2745 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:19 vm00 bash[20770]: cluster 2026-03-09T17:37:18.797361+0000 mgr.y (mgr.14505) 442 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:19 vm00 bash[20770]: cluster 2026-03-09T17:37:18.797361+0000 mgr.y (mgr.14505) 442 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:19 vm00 bash[20770]: cluster 2026-03-09T17:37:18.890553+0000 mon.a (mon.0) 2745 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T17:37:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:19 vm00 bash[20770]: cluster 2026-03-09T17:37:18.890553+0000 mon.a (mon.0) 2745 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T17:37:20.636 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:37:20 vm02 bash[51223]: logger=plugins.update.checker 
t=2026-03-09T17:37:20.193570465Z level=info msg="Update check succeeded" duration=54.278273ms 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:20 vm00 bash[28333]: cluster 2026-03-09T17:37:19.904637+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:20 vm00 bash[28333]: cluster 2026-03-09T17:37:19.904637+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:20 vm00 bash[28333]: audit 2026-03-09T17:37:19.904830+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:20 vm00 bash[28333]: audit 2026-03-09T17:37:19.904830+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:20 vm00 bash[28333]: audit 2026-03-09T17:37:19.906322+0000 mon.a (mon.0) 2747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:20 vm00 bash[28333]: audit 2026-03-09T17:37:19.906322+0000 mon.a (mon.0) 2747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:20 vm00 bash[20770]: cluster 2026-03-09T17:37:19.904637+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:20 vm00 bash[20770]: cluster 2026-03-09T17:37:19.904637+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:20 vm00 bash[20770]: audit 2026-03-09T17:37:19.904830+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:20 vm00 bash[20770]: audit 2026-03-09T17:37:19.904830+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:20 vm00 bash[20770]: audit 2026-03-09T17:37:19.906322+0000 mon.a (mon.0) 2747 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:20 vm00 bash[20770]: audit 2026-03-09T17:37:19.906322+0000 mon.a (mon.0) 2747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:20 vm02 bash[23351]: cluster 2026-03-09T17:37:19.904637+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T17:37:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:20 vm02 bash[23351]: cluster 2026-03-09T17:37:19.904637+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T17:37:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:20 vm02 bash[23351]: audit 2026-03-09T17:37:19.904830+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:20 vm02 bash[23351]: audit 2026-03-09T17:37:19.904830+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:20 vm02 bash[23351]: audit 2026-03-09T17:37:19.906322+0000 mon.a (mon.0) 2747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:20 vm02 bash[23351]: audit 2026-03-09T17:37:19.906322+0000 mon.a (mon.0) 2747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: cluster 2026-03-09T17:37:20.797714+0000 mgr.y (mgr.14505) 443 : cluster [DBG] pgmap v725: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: cluster 2026-03-09T17:37:20.797714+0000 mgr.y (mgr.14505) 443 : cluster [DBG] pgmap v725: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: audit 2026-03-09T17:37:20.884998+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: audit 2026-03-09T17:37:20.884998+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: cluster 2026-03-09T17:37:20.889904+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: cluster 2026-03-09T17:37:20.889904+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: audit 2026-03-09T17:37:20.904996+0000 mon.b (mon.1) 491 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: audit 2026-03-09T17:37:20.904996+0000 mon.b (mon.1) 491 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: cluster 2026-03-09T17:37:21.756518+0000 mon.a (mon.0) 2750 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:21 vm00 bash[28333]: cluster 2026-03-09T17:37:21.756518+0000 mon.a (mon.0) 2750 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: cluster 2026-03-09T17:37:20.797714+0000 mgr.y (mgr.14505) 443 : cluster [DBG] pgmap v725: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: cluster 2026-03-09T17:37:20.797714+0000 mgr.y (mgr.14505) 443 : cluster [DBG] pgmap v725: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: audit 2026-03-09T17:37:20.884998+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: audit 2026-03-09T17:37:20.884998+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: cluster 2026-03-09T17:37:20.889904+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: cluster 2026-03-09T17:37:20.889904+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: audit 2026-03-09T17:37:20.904996+0000 mon.b (mon.1) 491 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: audit 2026-03-09T17:37:20.904996+0000 mon.b (mon.1) 491 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: cluster 2026-03-09T17:37:21.756518+0000 mon.a (mon.0) 2750 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:21 vm00 bash[20770]: cluster 2026-03-09T17:37:21.756518+0000 mon.a (mon.0) 2750 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: cluster 2026-03-09T17:37:20.797714+0000 mgr.y (mgr.14505) 443 : cluster [DBG] pgmap v725: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: cluster 2026-03-09T17:37:20.797714+0000 mgr.y (mgr.14505) 443 : cluster [DBG] pgmap v725: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: audit 2026-03-09T17:37:20.884998+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: audit 2026-03-09T17:37:20.884998+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: cluster 2026-03-09T17:37:20.889904+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: cluster 2026-03-09T17:37:20.889904+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: audit 2026-03-09T17:37:20.904996+0000 mon.b (mon.1) 491 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: audit 2026-03-09T17:37:20.904996+0000 mon.b (mon.1) 491 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: cluster 2026-03-09T17:37:21.756518+0000 mon.a (mon.0) 2750 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:21 vm02 bash[23351]: cluster 2026-03-09T17:37:21.756518+0000 mon.a (mon.0) 2750 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:37:21 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:22 vm00 bash[28333]: cluster 2026-03-09T17:37:21.915635+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:22 vm00 bash[28333]: cluster 2026-03-09T17:37:21.915635+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:22 vm00 bash[28333]: audit 2026-03-09T17:37:21.961632+0000 mgr.y (mgr.14505) 444 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:22 vm00 bash[28333]: audit 2026-03-09T17:37:21.961632+0000 mgr.y (mgr.14505) 444 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:22 vm00 bash[20770]: cluster 2026-03-09T17:37:21.915635+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:22 vm00 bash[20770]: cluster 2026-03-09T17:37:21.915635+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T17:37:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:22 vm00 bash[20770]: audit 2026-03-09T17:37:21.961632+0000 mgr.y (mgr.14505) 444 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:23.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:22 vm00 bash[20770]: audit 2026-03-09T17:37:21.961632+0000 mgr.y (mgr.14505) 444 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:22 vm02 bash[23351]: cluster 2026-03-09T17:37:21.915635+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T17:37:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:22 vm02 bash[23351]: cluster 2026-03-09T17:37:21.915635+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T17:37:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:22 vm02 bash[23351]: audit 2026-03-09T17:37:21.961632+0000 mgr.y (mgr.14505) 444 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:22 vm02 bash[23351]: audit 2026-03-09T17:37:21.961632+0000 mgr.y (mgr.14505) 444 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:23 vm00 bash[28333]: cluster 2026-03-09T17:37:22.798300+0000 mgr.y (mgr.14505) 445 : cluster [DBG] pgmap v728: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:23 vm00 bash[28333]: cluster 2026-03-09T17:37:22.798300+0000 mgr.y (mgr.14505) 445 : cluster [DBG] pgmap v728: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:23 vm00 bash[28333]: cluster 2026-03-09T17:37:22.945806+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:23 vm00 bash[28333]: cluster 2026-03-09T17:37:22.945806+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:23 vm00 bash[20770]: cluster 2026-03-09T17:37:22.798300+0000 mgr.y (mgr.14505) 445 : cluster [DBG] pgmap v728: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:23 vm00 bash[20770]: cluster 2026-03-09T17:37:22.798300+0000 mgr.y (mgr.14505) 445 : cluster [DBG] pgmap v728: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:23 vm00 bash[20770]: cluster 2026-03-09T17:37:22.945806+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T17:37:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:23 vm00 bash[20770]: cluster 2026-03-09T17:37:22.945806+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T17:37:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:23 vm02 bash[23351]: cluster 2026-03-09T17:37:22.798300+0000 mgr.y (mgr.14505) 445 : cluster [DBG] pgmap v728: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 957 
MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:23 vm02 bash[23351]: cluster 2026-03-09T17:37:22.798300+0000 mgr.y (mgr.14505) 445 : cluster [DBG] pgmap v728: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:23 vm02 bash[23351]: cluster 2026-03-09T17:37:22.945806+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T17:37:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:23 vm02 bash[23351]: cluster 2026-03-09T17:37:22.945806+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T17:37:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:25 vm00 bash[28333]: cluster 2026-03-09T17:37:24.798708+0000 mgr.y (mgr.14505) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-09T17:37:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:25 vm00 bash[28333]: cluster 2026-03-09T17:37:24.798708+0000 mgr.y (mgr.14505) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-09T17:37:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:25 vm00 bash[20770]: cluster 2026-03-09T17:37:24.798708+0000 mgr.y (mgr.14505) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-09T17:37:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:25 vm00 bash[20770]: cluster 2026-03-09T17:37:24.798708+0000 mgr.y (mgr.14505) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-09T17:37:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:25 vm02 bash[23351]: cluster 2026-03-09T17:37:24.798708+0000 mgr.y (mgr.14505) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-09T17:37:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:25 vm02 bash[23351]: cluster 2026-03-09T17:37:24.798708+0000 mgr.y (mgr.14505) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-09T17:37:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:37:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:37:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:37:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:26 vm00 bash[28333]: cluster 2026-03-09T17:37:26.757070+0000 mon.a (mon.0) 2753 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:27.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:26 vm00 bash[28333]: cluster 2026-03-09T17:37:26.757070+0000 mon.a (mon.0) 2753 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:26 vm00 bash[20770]: cluster 2026-03-09T17:37:26.757070+0000 mon.a (mon.0) 2753 : cluster [WRN] Health check update: 3 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:27.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:26 vm00 bash[20770]: cluster 2026-03-09T17:37:26.757070+0000 mon.a (mon.0) 2753 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:26 vm02 bash[23351]: cluster 2026-03-09T17:37:26.757070+0000 mon.a (mon.0) 2753 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:26 vm02 bash[23351]: cluster 2026-03-09T17:37:26.757070+0000 mon.a (mon.0) 2753 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:27 vm00 bash[28333]: cluster 2026-03-09T17:37:26.799134+0000 mgr.y (mgr.14505) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:27 vm00 bash[28333]: cluster 2026-03-09T17:37:26.799134+0000 mgr.y (mgr.14505) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:27 vm00 bash[28333]: audit 2026-03-09T17:37:27.856923+0000 mon.c (mon.2) 612 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:27 vm00 bash[28333]: audit 2026-03-09T17:37:27.856923+0000 mon.c (mon.2) 612 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:27 vm00 bash[20770]: cluster 2026-03-09T17:37:26.799134+0000 mgr.y (mgr.14505) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:27 vm00 bash[20770]: cluster 2026-03-09T17:37:26.799134+0000 mgr.y (mgr.14505) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:27 vm00 bash[20770]: audit 2026-03-09T17:37:27.856923+0000 mon.c (mon.2) 612 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:27 vm00 bash[20770]: audit 2026-03-09T17:37:27.856923+0000 mon.c (mon.2) 612 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:27 vm02 bash[23351]: cluster 2026-03-09T17:37:26.799134+0000 mgr.y (mgr.14505) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T17:37:28.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:27 vm02 bash[23351]: cluster 2026-03-09T17:37:26.799134+0000 mgr.y (mgr.14505) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T17:37:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:27 vm02 bash[23351]: audit 2026-03-09T17:37:27.856923+0000 mon.c (mon.2) 612 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:27 vm02 bash[23351]: audit 2026-03-09T17:37:27.856923+0000 mon.c (mon.2) 612 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:29 vm00 bash[28333]: cluster 2026-03-09T17:37:28.799972+0000 mgr.y (mgr.14505) 448 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:37:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:29 vm00 bash[28333]: cluster 2026-03-09T17:37:28.799972+0000 mgr.y (mgr.14505) 448 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:37:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:29 vm00 bash[20770]: cluster 2026-03-09T17:37:28.799972+0000 mgr.y (mgr.14505) 448 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:37:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:29 vm00 bash[20770]: cluster 2026-03-09T17:37:28.799972+0000 mgr.y (mgr.14505) 448 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:37:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:29 vm02 bash[23351]: cluster 2026-03-09T17:37:28.799972+0000 mgr.y (mgr.14505) 448 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:37:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:29 vm02 bash[23351]: cluster 2026-03-09T17:37:28.799972+0000 mgr.y (mgr.14505) 448 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T17:37:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:32 vm00 bash[28333]: cluster 2026-03-09T17:37:30.800312+0000 mgr.y (mgr.14505) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-09T17:37:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:32 vm00 bash[28333]: cluster 2026-03-09T17:37:30.800312+0000 mgr.y (mgr.14505) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-09T17:37:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:31 vm00 bash[20770]: cluster 2026-03-09T17:37:30.800312+0000 mgr.y (mgr.14505) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.8 
KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-09T17:37:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:31 vm00 bash[20770]: cluster 2026-03-09T17:37:30.800312+0000 mgr.y (mgr.14505) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-09T17:37:32.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:37:31 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:37:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:31 vm02 bash[23351]: cluster 2026-03-09T17:37:30.800312+0000 mgr.y (mgr.14505) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-09T17:37:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:31 vm02 bash[23351]: cluster 2026-03-09T17:37:30.800312+0000 mgr.y (mgr.14505) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-09T17:37:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:33 vm00 bash[28333]: audit 2026-03-09T17:37:31.972390+0000 mgr.y (mgr.14505) 450 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:33 vm00 bash[28333]: audit 2026-03-09T17:37:31.972390+0000 mgr.y (mgr.14505) 450 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:33 vm00 bash[20770]: audit 2026-03-09T17:37:31.972390+0000 mgr.y (mgr.14505) 450 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:33 vm00 bash[20770]: audit 2026-03-09T17:37:31.972390+0000 mgr.y (mgr.14505) 450 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:33 vm02 bash[23351]: audit 2026-03-09T17:37:31.972390+0000 mgr.y (mgr.14505) 450 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:33 vm02 bash[23351]: audit 2026-03-09T17:37:31.972390+0000 mgr.y (mgr.14505) 450 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:34 vm00 bash[28333]: cluster 2026-03-09T17:37:32.800963+0000 mgr.y (mgr.14505) 451 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:34 vm00 bash[28333]: cluster 2026-03-09T17:37:32.800963+0000 mgr.y (mgr.14505) 451 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:34 vm00 bash[28333]: cluster 2026-03-09T17:37:33.034216+0000 mon.a (mon.0) 
2754 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:34 vm00 bash[28333]: cluster 2026-03-09T17:37:33.034216+0000 mon.a (mon.0) 2754 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:34 vm00 bash[20770]: cluster 2026-03-09T17:37:32.800963+0000 mgr.y (mgr.14505) 451 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:34 vm00 bash[20770]: cluster 2026-03-09T17:37:32.800963+0000 mgr.y (mgr.14505) 451 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:34 vm00 bash[20770]: cluster 2026-03-09T17:37:33.034216+0000 mon.a (mon.0) 2754 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T17:37:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:34 vm00 bash[20770]: cluster 2026-03-09T17:37:33.034216+0000 mon.a (mon.0) 2754 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T17:37:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:34 vm02 bash[23351]: cluster 2026-03-09T17:37:32.800963+0000 mgr.y (mgr.14505) 451 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T17:37:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:34 vm02 bash[23351]: cluster 2026-03-09T17:37:32.800963+0000 mgr.y (mgr.14505) 451 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T17:37:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:34 vm02 bash[23351]: cluster 2026-03-09T17:37:33.034216+0000 mon.a (mon.0) 2754 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T17:37:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:34 vm02 bash[23351]: cluster 2026-03-09T17:37:33.034216+0000 mon.a (mon.0) 2754 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T17:37:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:36 vm02 bash[23351]: cluster 2026-03-09T17:37:34.801303+0000 mgr.y (mgr.14505) 452 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:36 vm02 bash[23351]: cluster 2026-03-09T17:37:34.801303+0000 mgr.y (mgr.14505) 452 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:36.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:36 vm00 bash[28333]: cluster 2026-03-09T17:37:34.801303+0000 mgr.y (mgr.14505) 452 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:36 vm00 bash[28333]: cluster 2026-03-09T17:37:34.801303+0000 mgr.y (mgr.14505) 452 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:37:36 vm00 bash[20770]: cluster 2026-03-09T17:37:34.801303+0000 mgr.y (mgr.14505) 452 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:36 vm00 bash[20770]: cluster 2026-03-09T17:37:34.801303+0000 mgr.y (mgr.14505) 452 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:37:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:37:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:37:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:38 vm00 bash[28333]: cluster 2026-03-09T17:37:36.801626+0000 mgr.y (mgr.14505) 453 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:38 vm00 bash[28333]: cluster 2026-03-09T17:37:36.801626+0000 mgr.y (mgr.14505) 453 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:38 vm00 bash[20770]: cluster 2026-03-09T17:37:36.801626+0000 mgr.y (mgr.14505) 453 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:38 vm00 bash[20770]: cluster 2026-03-09T17:37:36.801626+0000 mgr.y (mgr.14505) 453 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:38 vm02 bash[23351]: cluster 2026-03-09T17:37:36.801626+0000 mgr.y (mgr.14505) 453 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:38 vm02 bash[23351]: cluster 2026-03-09T17:37:36.801626+0000 mgr.y (mgr.14505) 453 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 2 op/s 2026-03-09T17:37:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:40 vm00 bash[28333]: cluster 2026-03-09T17:37:38.802398+0000 mgr.y (mgr.14505) 454 : cluster [DBG] pgmap v738: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:40 vm00 bash[28333]: cluster 2026-03-09T17:37:38.802398+0000 mgr.y (mgr.14505) 454 : cluster [DBG] pgmap v738: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:40 vm00 bash[20770]: cluster 2026-03-09T17:37:38.802398+0000 mgr.y (mgr.14505) 454 : cluster [DBG] pgmap v738: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:40.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:40 vm00 bash[20770]: cluster 2026-03-09T17:37:38.802398+0000 mgr.y (mgr.14505) 454 : cluster [DBG] pgmap v738: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:40 vm02 bash[23351]: cluster 2026-03-09T17:37:38.802398+0000 mgr.y (mgr.14505) 454 : cluster [DBG] pgmap v738: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:40 vm02 bash[23351]: cluster 2026-03-09T17:37:38.802398+0000 mgr.y (mgr.14505) 454 : cluster [DBG] pgmap v738: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:42 vm02 bash[23351]: cluster 2026-03-09T17:37:40.802806+0000 mgr.y (mgr.14505) 455 : cluster [DBG] pgmap v739: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:42 vm02 bash[23351]: cluster 2026-03-09T17:37:40.802806+0000 mgr.y (mgr.14505) 455 : cluster [DBG] pgmap v739: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:42.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:37:41 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:37:42.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:42 vm00 bash[28333]: cluster 2026-03-09T17:37:40.802806+0000 mgr.y (mgr.14505) 455 : cluster [DBG] pgmap v739: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:42 vm00 bash[28333]: cluster 2026-03-09T17:37:40.802806+0000 mgr.y (mgr.14505) 455 : cluster [DBG] pgmap v739: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:42 vm00 bash[20770]: cluster 2026-03-09T17:37:40.802806+0000 mgr.y (mgr.14505) 455 : cluster [DBG] pgmap v739: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:42 vm00 bash[20770]: cluster 2026-03-09T17:37:40.802806+0000 mgr.y (mgr.14505) 455 : cluster [DBG] pgmap v739: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:41.977738+0000 mgr.y (mgr.14505) 456 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:41.977738+0000 mgr.y (mgr.14505) 456 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:42.863301+0000 mon.c (mon.2) 613 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:42.863301+0000 mon.c (mon.2) 613 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.047404+0000 mon.b (mon.1) 492 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.047404+0000 mon.b (mon.1) 492 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.048278+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.048278+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.048479+0000 mon.a (mon.0) 2755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.048479+0000 mon.a (mon.0) 2755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.049253+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:43 vm00 bash[28333]: audit 2026-03-09T17:37:43.049253+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:41.977738+0000 mgr.y (mgr.14505) 456 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:41.977738+0000 mgr.y (mgr.14505) 456 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:42.863301+0000 mon.c (mon.2) 613 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:42.863301+0000 mon.c (mon.2) 613 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.047404+0000 mon.b (mon.1) 492 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.047404+0000 mon.b (mon.1) 492 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.048278+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.048278+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.048479+0000 mon.a (mon.0) 2755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.048479+0000 mon.a (mon.0) 2755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.049253+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:43 vm00 bash[20770]: audit 2026-03-09T17:37:43.049253+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:41.977738+0000 mgr.y (mgr.14505) 456 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:41.977738+0000 mgr.y (mgr.14505) 456 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:42.863301+0000 mon.c (mon.2) 613 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:42.863301+0000 mon.c (mon.2) 613 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.047404+0000 mon.b (mon.1) 492 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.047404+0000 mon.b (mon.1) 492 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.048278+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.048278+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.048479+0000 mon.a (mon.0) 2755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.048479+0000 mon.a (mon.0) 2755 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.049253+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:43 vm02 bash[23351]: audit 2026-03-09T17:37:43.049253+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-94"}]: dispatch 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:44 vm00 bash[28333]: cluster 2026-03-09T17:37:42.803509+0000 mgr.y (mgr.14505) 457 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:44 vm00 bash[28333]: cluster 2026-03-09T17:37:42.803509+0000 mgr.y (mgr.14505) 457 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:44 vm00 bash[28333]: cluster 2026-03-09T17:37:43.188423+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:44 vm00 bash[28333]: cluster 2026-03-09T17:37:43.188423+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:44 vm00 bash[20770]: cluster 2026-03-09T17:37:42.803509+0000 mgr.y (mgr.14505) 457 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:44 vm00 bash[20770]: cluster 2026-03-09T17:37:42.803509+0000 mgr.y (mgr.14505) 457 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:44 vm00 bash[20770]: cluster 2026-03-09T17:37:43.188423+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T17:37:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:44 vm00 bash[20770]: cluster 2026-03-09T17:37:43.188423+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T17:37:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:44 vm02 bash[23351]: cluster 2026-03-09T17:37:42.803509+0000 mgr.y (mgr.14505) 457 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:44 vm02 bash[23351]: cluster 2026-03-09T17:37:42.803509+0000 mgr.y (mgr.14505) 457 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:37:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:44 vm02 bash[23351]: cluster 2026-03-09T17:37:43.188423+0000 mon.a (mon.0) 2757 : 
cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T17:37:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:44 vm02 bash[23351]: cluster 2026-03-09T17:37:43.188423+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:45 vm00 bash[28333]: cluster 2026-03-09T17:37:44.186340+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:45 vm00 bash[28333]: cluster 2026-03-09T17:37:44.186340+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:45 vm00 bash[28333]: audit 2026-03-09T17:37:44.198821+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:45 vm00 bash[28333]: audit 2026-03-09T17:37:44.198821+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:45 vm00 bash[28333]: audit 2026-03-09T17:37:44.199953+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:45 vm00 bash[28333]: audit 2026-03-09T17:37:44.199953+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:45 vm00 bash[20770]: cluster 2026-03-09T17:37:44.186340+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:45 vm00 bash[20770]: cluster 2026-03-09T17:37:44.186340+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:45 vm00 bash[20770]: audit 2026-03-09T17:37:44.198821+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:45 vm00 bash[20770]: audit 2026-03-09T17:37:44.198821+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:45 vm00 bash[20770]: audit 2026-03-09T17:37:44.199953+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:45 vm00 bash[20770]: audit 2026-03-09T17:37:44.199953+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:45 vm02 bash[23351]: cluster 2026-03-09T17:37:44.186340+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T17:37:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:45 vm02 bash[23351]: cluster 2026-03-09T17:37:44.186340+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T17:37:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:45 vm02 bash[23351]: audit 2026-03-09T17:37:44.198821+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:45 vm02 bash[23351]: audit 2026-03-09T17:37:44.198821+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:45 vm02 bash[23351]: audit 2026-03-09T17:37:44.199953+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:45.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:45 vm02 bash[23351]: audit 2026-03-09T17:37:44.199953+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: cluster 2026-03-09T17:37:44.803809+0000 mgr.y (mgr.14505) 458 : cluster [DBG] pgmap v743: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: cluster 2026-03-09T17:37:44.803809+0000 mgr.y (mgr.14505) 458 : cluster [DBG] pgmap v743: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: cluster 2026-03-09T17:37:45.179330+0000 mon.a (mon.0) 2760 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: cluster 2026-03-09T17:37:45.179330+0000 mon.a (mon.0) 2760 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: audit 2026-03-09T17:37:45.189591+0000 mon.a (mon.0) 2761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: audit 2026-03-09T17:37:45.189591+0000 mon.a (mon.0) 2761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: cluster 2026-03-09T17:37:45.207147+0000 mon.a (mon.0) 2762 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: cluster 2026-03-09T17:37:45.207147+0000 mon.a (mon.0) 2762 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: audit 2026-03-09T17:37:45.208086+0000 mon.b (mon.1) 495 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:46 vm00 bash[28333]: audit 2026-03-09T17:37:45.208086+0000 mon.b (mon.1) 495 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: cluster 2026-03-09T17:37:44.803809+0000 mgr.y (mgr.14505) 458 : cluster [DBG] pgmap v743: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: cluster 2026-03-09T17:37:44.803809+0000 mgr.y (mgr.14505) 458 : cluster [DBG] pgmap v743: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: cluster 2026-03-09T17:37:45.179330+0000 mon.a (mon.0) 2760 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: cluster 2026-03-09T17:37:45.179330+0000 mon.a (mon.0) 2760 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: audit 2026-03-09T17:37:45.189591+0000 mon.a (mon.0) 2761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: audit 2026-03-09T17:37:45.189591+0000 mon.a (mon.0) 2761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: cluster 2026-03-09T17:37:45.207147+0000 mon.a (mon.0) 2762 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: cluster 2026-03-09T17:37:45.207147+0000 mon.a (mon.0) 2762 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: audit 2026-03-09T17:37:45.208086+0000 mon.b (mon.1) 495 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:46 vm00 bash[20770]: audit 2026-03-09T17:37:45.208086+0000 mon.b (mon.1) 495 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:37:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:37:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: cluster 2026-03-09T17:37:44.803809+0000 mgr.y (mgr.14505) 458 : cluster [DBG] pgmap v743: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: cluster 2026-03-09T17:37:44.803809+0000 mgr.y (mgr.14505) 458 : cluster [DBG] pgmap v743: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: cluster 2026-03-09T17:37:45.179330+0000 mon.a (mon.0) 2760 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: cluster 2026-03-09T17:37:45.179330+0000 mon.a (mon.0) 2760 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: audit 2026-03-09T17:37:45.189591+0000 mon.a (mon.0) 2761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: audit 2026-03-09T17:37:45.189591+0000 mon.a (mon.0) 2761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: cluster 2026-03-09T17:37:45.207147+0000 mon.a (mon.0) 2762 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: cluster 2026-03-09T17:37:45.207147+0000 mon.a (mon.0) 2762 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: audit 2026-03-09T17:37:45.208086+0000 mon.b (mon.1) 495 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:46 vm02 bash[23351]: audit 2026-03-09T17:37:45.208086+0000 mon.b (mon.1) 495 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:47 vm00 bash[28333]: cluster 2026-03-09T17:37:46.214134+0000 mon.a (mon.0) 2763 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:47 vm00 bash[28333]: cluster 2026-03-09T17:37:46.214134+0000 mon.a (mon.0) 2763 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:47 vm00 bash[28333]: cluster 2026-03-09T17:37:46.775017+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:47 vm00 bash[28333]: cluster 2026-03-09T17:37:46.775017+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:47 vm00 bash[20770]: cluster 2026-03-09T17:37:46.214134+0000 mon.a (mon.0) 2763 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:47 vm00 bash[20770]: cluster 2026-03-09T17:37:46.214134+0000 mon.a (mon.0) 2763 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:47 vm00 bash[20770]: cluster 2026-03-09T17:37:46.775017+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T17:37:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:47 vm00 bash[20770]: cluster 2026-03-09T17:37:46.775017+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T17:37:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:47 vm02 bash[23351]: cluster 2026-03-09T17:37:46.214134+0000 mon.a (mon.0) 2763 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T17:37:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:47 vm02 bash[23351]: cluster 2026-03-09T17:37:46.214134+0000 mon.a (mon.0) 2763 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T17:37:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:47 vm02 bash[23351]: cluster 2026-03-09T17:37:46.775017+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T17:37:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:47 vm02 bash[23351]: cluster 2026-03-09T17:37:46.775017+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T17:37:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:48 vm00 bash[28333]: cluster 2026-03-09T17:37:46.804079+0000 mgr.y (mgr.14505) 459 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:48 vm00 bash[28333]: cluster 2026-03-09T17:37:46.804079+0000 mgr.y (mgr.14505) 459 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:48 vm00 bash[20770]: cluster 2026-03-09T17:37:46.804079+0000 mgr.y (mgr.14505) 459 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:48 vm00 bash[20770]: cluster 2026-03-09T17:37:46.804079+0000 mgr.y 
(mgr.14505) 459 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:48 vm02 bash[23351]: cluster 2026-03-09T17:37:46.804079+0000 mgr.y (mgr.14505) 459 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:48 vm02 bash[23351]: cluster 2026-03-09T17:37:46.804079+0000 mgr.y (mgr.14505) 459 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:37:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:49 vm00 bash[28333]: audit 2026-03-09T17:37:49.062574+0000 mon.c (mon.2) 614 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:37:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:49 vm00 bash[28333]: audit 2026-03-09T17:37:49.062574+0000 mon.c (mon.2) 614 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:37:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:49 vm00 bash[20770]: audit 2026-03-09T17:37:49.062574+0000 mon.c (mon.2) 614 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:37:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:49 vm00 bash[20770]: audit 2026-03-09T17:37:49.062574+0000 mon.c (mon.2) 614 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:37:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:49 vm02 bash[23351]: audit 2026-03-09T17:37:49.062574+0000 mon.c (mon.2) 614 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:37:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:49 vm02 bash[23351]: audit 2026-03-09T17:37:49.062574+0000 mon.c (mon.2) 614 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: cluster 2026-03-09T17:37:48.804766+0000 mgr.y (mgr.14505) 460 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: cluster 2026-03-09T17:37:48.804766+0000 mgr.y (mgr.14505) 460 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.384745+0000 mon.c (mon.2) 615 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.384745+0000 mon.c (mon.2) 615 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.385144+0000 mon.a (mon.0) 2765 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.385144+0000 mon.a (mon.0) 2765 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.385771+0000 mon.c (mon.2) 616 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.385771+0000 mon.c (mon.2) 616 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.386007+0000 mon.a (mon.0) 2766 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.386007+0000 mon.a (mon.0) 2766 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.387414+0000 mon.c (mon.2) 617 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.387414+0000 mon.c (mon.2) 617 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.388340+0000 mon.c (mon.2) 618 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.388340+0000 mon.c (mon.2) 618 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.394218+0000 mon.a (mon.0) 2767 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:50 vm00 bash[28333]: audit 2026-03-09T17:37:49.394218+0000 mon.a (mon.0) 2767 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:37:50.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: cluster 2026-03-09T17:37:48.804766+0000 mgr.y (mgr.14505) 460 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: cluster 2026-03-09T17:37:48.804766+0000 mgr.y (mgr.14505) 460 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.384745+0000 mon.c (mon.2) 615 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.384745+0000 mon.c (mon.2) 615 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.385144+0000 mon.a (mon.0) 2765 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.385144+0000 mon.a (mon.0) 2765 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.385771+0000 mon.c (mon.2) 616 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.385771+0000 mon.c (mon.2) 616 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.386007+0000 mon.a (mon.0) 2766 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.386007+0000 mon.a (mon.0) 2766 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.387414+0000 mon.c (mon.2) 617 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.387414+0000 mon.c (mon.2) 617 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.388340+0000 mon.c (mon.2) 618 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.388340+0000 mon.c (mon.2) 618 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.394218+0000 mon.a (mon.0) 2767 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:37:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:50 vm00 bash[20770]: audit 2026-03-09T17:37:49.394218+0000 mon.a (mon.0) 2767 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: cluster 2026-03-09T17:37:48.804766+0000 mgr.y (mgr.14505) 460 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: cluster 2026-03-09T17:37:48.804766+0000 mgr.y (mgr.14505) 460 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.384745+0000 mon.c (mon.2) 615 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.384745+0000 mon.c (mon.2) 615 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.385144+0000 mon.a (mon.0) 2765 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.385144+0000 mon.a (mon.0) 2765 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.385771+0000 mon.c (mon.2) 616 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.385771+0000 mon.c (mon.2) 616 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.386007+0000 mon.a (mon.0) 2766 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.386007+0000 mon.a (mon.0) 2766 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.387414+0000 mon.c (mon.2) 617 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.387414+0000 mon.c (mon.2) 617 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.388340+0000 mon.c (mon.2) 618 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.388340+0000 mon.c (mon.2) 618 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.394218+0000 mon.a (mon.0) 2767 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:37:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:50 vm02 bash[23351]: audit 2026-03-09T17:37:49.394218+0000 mon.a (mon.0) 2767 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:37:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:52 vm02 bash[23351]: cluster 2026-03-09T17:37:50.805090+0000 mgr.y (mgr.14505) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:52 vm02 bash[23351]: cluster 2026-03-09T17:37:50.805090+0000 mgr.y (mgr.14505) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:52 vm02 bash[23351]: cluster 2026-03-09T17:37:51.761305+0000 mon.a (mon.0) 2768 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:52 vm02 bash[23351]: cluster 2026-03-09T17:37:51.761305+0000 mon.a (mon.0) 2768 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:52.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:37:51 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:52 vm00 bash[28333]: cluster 2026-03-09T17:37:50.805090+0000 mgr.y (mgr.14505) 461 
: cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:52 vm00 bash[28333]: cluster 2026-03-09T17:37:50.805090+0000 mgr.y (mgr.14505) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:52 vm00 bash[28333]: cluster 2026-03-09T17:37:51.761305+0000 mon.a (mon.0) 2768 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:52 vm00 bash[28333]: cluster 2026-03-09T17:37:51.761305+0000 mon.a (mon.0) 2768 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:52 vm00 bash[20770]: cluster 2026-03-09T17:37:50.805090+0000 mgr.y (mgr.14505) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:52 vm00 bash[20770]: cluster 2026-03-09T17:37:50.805090+0000 mgr.y (mgr.14505) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:52 vm00 bash[20770]: cluster 2026-03-09T17:37:51.761305+0000 mon.a (mon.0) 2768 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:52 vm00 bash[20770]: cluster 2026-03-09T17:37:51.761305+0000 mon.a (mon.0) 2768 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:37:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:53 vm00 bash[28333]: audit 2026-03-09T17:37:51.985934+0000 mgr.y (mgr.14505) 462 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:53 vm00 bash[28333]: audit 2026-03-09T17:37:51.985934+0000 mgr.y (mgr.14505) 462 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:53 vm00 bash[20770]: audit 2026-03-09T17:37:51.985934+0000 mgr.y (mgr.14505) 462 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:53 vm00 bash[20770]: audit 2026-03-09T17:37:51.985934+0000 mgr.y (mgr.14505) 462 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:53 vm02 bash[23351]: audit 2026-03-09T17:37:51.985934+0000 mgr.y (mgr.14505) 462 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:53.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:53 vm02 bash[23351]: audit 2026-03-09T17:37:51.985934+0000 mgr.y (mgr.14505) 462 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:37:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:54 vm00 bash[28333]: cluster 2026-03-09T17:37:52.805709+0000 mgr.y (mgr.14505) 463 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 807 B/s wr, 2 op/s 2026-03-09T17:37:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:54 vm00 bash[28333]: cluster 2026-03-09T17:37:52.805709+0000 mgr.y (mgr.14505) 463 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 807 B/s wr, 2 op/s 2026-03-09T17:37:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:54 vm00 bash[20770]: cluster 2026-03-09T17:37:52.805709+0000 mgr.y (mgr.14505) 463 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 807 B/s wr, 2 op/s 2026-03-09T17:37:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:54 vm00 bash[20770]: cluster 2026-03-09T17:37:52.805709+0000 mgr.y (mgr.14505) 463 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 807 B/s wr, 2 op/s 2026-03-09T17:37:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:54 vm02 bash[23351]: cluster 2026-03-09T17:37:52.805709+0000 mgr.y (mgr.14505) 463 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 807 B/s wr, 2 op/s 2026-03-09T17:37:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:54 vm02 bash[23351]: cluster 2026-03-09T17:37:52.805709+0000 mgr.y (mgr.14505) 463 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 807 B/s wr, 2 op/s 2026-03-09T17:37:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:56 vm00 bash[20770]: cluster 2026-03-09T17:37:54.806240+0000 mgr.y (mgr.14505) 464 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 714 B/s wr, 2 op/s 2026-03-09T17:37:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:56 vm00 bash[20770]: cluster 2026-03-09T17:37:54.806240+0000 mgr.y (mgr.14505) 464 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 714 B/s wr, 2 op/s 2026-03-09T17:37:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:56 vm00 bash[28333]: cluster 2026-03-09T17:37:54.806240+0000 mgr.y (mgr.14505) 464 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 714 B/s wr, 2 op/s 2026-03-09T17:37:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:56 vm00 bash[28333]: cluster 2026-03-09T17:37:54.806240+0000 mgr.y (mgr.14505) 464 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 714 B/s wr, 2 op/s 2026-03-09T17:37:56.538 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 17:37:56 vm00 bash[37416]: debug 2026-03-09T17:37:56.251+0000 7f73802cf640 -1 snap_mapper.add_oid found existing snaps mapped on 102:05748823:test-rados-api-vm00-60118-97::foo:21, removing 2026-03-09T17:37:56.538 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:37:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:37:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:37:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:56 vm02 bash[23351]: cluster 2026-03-09T17:37:54.806240+0000 mgr.y (mgr.14505) 464 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 714 B/s wr, 2 op/s 2026-03-09T17:37:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:56 vm02 bash[23351]: cluster 2026-03-09T17:37:54.806240+0000 mgr.y (mgr.14505) 464 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 714 B/s wr, 2 op/s 2026-03-09T17:37:56.636 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 17:37:56 vm02 bash[32792]: debug 2026-03-09T17:37:56.252+0000 7f84dd32b640 -1 snap_mapper.add_oid found existing snaps mapped on 102:05748823:test-rados-api-vm00-60118-97::foo:21, removing 2026-03-09T17:37:56.636 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 17:37:56 vm02 bash[38575]: debug 2026-03-09T17:37:56.248+0000 7f6ffef7f640 -1 snap_mapper.add_oid found existing snaps mapped on 102:05748823:test-rados-api-vm00-60118-97::foo:21, removing 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.262719+0000 mon.b (mon.1) 496 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.262719+0000 mon.b (mon.1) 496 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.267931+0000 mon.a (mon.0) 2769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.267931+0000 mon.a (mon.0) 2769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.268934+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.268934+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.276824+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:57 vm02 bash[23351]: audit 2026-03-09T17:37:56.276824+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.262719+0000 mon.b (mon.1) 496 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.262719+0000 mon.b (mon.1) 496 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.267931+0000 mon.a (mon.0) 2769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.267931+0000 mon.a (mon.0) 2769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.268934+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.268934+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.276824+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:57 vm00 bash[20770]: audit 2026-03-09T17:37:56.276824+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.262719+0000 mon.b (mon.1) 496 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.262719+0000 mon.b (mon.1) 496 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.267931+0000 mon.a (mon.0) 2769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.267931+0000 mon.a (mon.0) 2769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.268934+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.268934+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.276824+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:57.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:57 vm00 bash[28333]: audit 2026-03-09T17:37:56.276824+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-96"}]: dispatch 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:58 vm00 bash[20770]: cluster 2026-03-09T17:37:56.806553+0000 mgr.y (mgr.14505) 465 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 611 B/s wr, 2 op/s 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:58 vm00 bash[20770]: cluster 2026-03-09T17:37:56.806553+0000 mgr.y (mgr.14505) 465 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 611 B/s wr, 2 op/s 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:58 vm00 bash[20770]: cluster 2026-03-09T17:37:57.296179+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:58 vm00 bash[20770]: cluster 2026-03-09T17:37:57.296179+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:58 vm00 bash[20770]: audit 2026-03-09T17:37:57.871354+0000 mon.c (mon.2) 619 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:58 vm00 bash[20770]: audit 2026-03-09T17:37:57.871354+0000 mon.c (mon.2) 619 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:58 vm00 bash[28333]: cluster 2026-03-09T17:37:56.806553+0000 mgr.y (mgr.14505) 465 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 611 B/s wr, 2 op/s 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:58 vm00 bash[28333]: cluster 2026-03-09T17:37:56.806553+0000 mgr.y (mgr.14505) 465 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 611 B/s wr, 2 op/s 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:58 vm00 bash[28333]: cluster 2026-03-09T17:37:57.296179+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:58 vm00 bash[28333]: cluster 2026-03-09T17:37:57.296179+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:58 vm00 bash[28333]: audit 2026-03-09T17:37:57.871354+0000 mon.c (mon.2) 619 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:58 vm00 bash[28333]: audit 2026-03-09T17:37:57.871354+0000 mon.c (mon.2) 619 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:58 vm02 bash[23351]: cluster 2026-03-09T17:37:56.806553+0000 mgr.y (mgr.14505) 465 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 
8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 611 B/s wr, 2 op/s 2026-03-09T17:37:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:58 vm02 bash[23351]: cluster 2026-03-09T17:37:56.806553+0000 mgr.y (mgr.14505) 465 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 611 B/s wr, 2 op/s 2026-03-09T17:37:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:58 vm02 bash[23351]: cluster 2026-03-09T17:37:57.296179+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T17:37:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:58 vm02 bash[23351]: cluster 2026-03-09T17:37:57.296179+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T17:37:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:58 vm02 bash[23351]: audit 2026-03-09T17:37:57.871354+0000 mon.c (mon.2) 619 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:58 vm02 bash[23351]: audit 2026-03-09T17:37:57.871354+0000 mon.c (mon.2) 619 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:37:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:59 vm02 bash[23351]: cluster 2026-03-09T17:37:58.306229+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T17:37:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:59 vm02 bash[23351]: cluster 2026-03-09T17:37:58.306229+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T17:37:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:59 vm02 bash[23351]: audit 2026-03-09T17:37:58.310639+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:59 vm02 bash[23351]: audit 2026-03-09T17:37:58.310639+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:59 vm02 bash[23351]: audit 2026-03-09T17:37:58.315084+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:37:59 vm02 bash[23351]: audit 2026-03-09T17:37:58.315084+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:59 vm00 bash[20770]: cluster 2026-03-09T17:37:58.306229+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:59 vm00 bash[20770]: cluster 2026-03-09T17:37:58.306229+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:59 vm00 bash[20770]: audit 2026-03-09T17:37:58.310639+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:59 vm00 bash[20770]: audit 2026-03-09T17:37:58.310639+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:59 vm00 bash[20770]: audit 2026-03-09T17:37:58.315084+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:37:59 vm00 bash[20770]: audit 2026-03-09T17:37:58.315084+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:59 vm00 bash[28333]: cluster 2026-03-09T17:37:58.306229+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:59 vm00 bash[28333]: cluster 2026-03-09T17:37:58.306229+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:59 vm00 bash[28333]: audit 2026-03-09T17:37:58.310639+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:59 vm00 bash[28333]: audit 2026-03-09T17:37:58.310639+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:59 vm00 bash[28333]: audit 2026-03-09T17:37:58.315084+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:37:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:37:59 vm00 bash[28333]: audit 2026-03-09T17:37:58.315084+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: cluster 2026-03-09T17:37:58.806839+0000 mgr.y (mgr.14505) 466 : cluster [DBG] pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 op/s 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: cluster 2026-03-09T17:37:58.806839+0000 mgr.y (mgr.14505) 466 : cluster [DBG] pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 op/s 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: cluster 2026-03-09T17:37:59.294829+0000 mon.a (mon.0) 2774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: cluster 2026-03-09T17:37:59.294829+0000 mon.a (mon.0) 2774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: audit 2026-03-09T17:37:59.296861+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: audit 2026-03-09T17:37:59.296861+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: cluster 2026-03-09T17:37:59.306687+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: cluster 2026-03-09T17:37:59.306687+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: audit 2026-03-09T17:37:59.336475+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: audit 2026-03-09T17:37:59.336475+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: audit 2026-03-09T17:37:59.337553+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:00 vm02 bash[23351]: audit 2026-03-09T17:37:59.337553+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: cluster 2026-03-09T17:37:58.806839+0000 mgr.y (mgr.14505) 466 : cluster [DBG] pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 op/s 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: cluster 2026-03-09T17:37:58.806839+0000 mgr.y (mgr.14505) 466 : cluster [DBG] pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 op/s 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: cluster 2026-03-09T17:37:59.294829+0000 mon.a (mon.0) 2774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: cluster 2026-03-09T17:37:59.294829+0000 mon.a (mon.0) 2774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: audit 2026-03-09T17:37:59.296861+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: audit 2026-03-09T17:37:59.296861+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: cluster 2026-03-09T17:37:59.306687+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: cluster 2026-03-09T17:37:59.306687+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: audit 2026-03-09T17:37:59.336475+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: audit 2026-03-09T17:37:59.336475+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: audit 2026-03-09T17:37:59.337553+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:00 vm00 bash[20770]: audit 2026-03-09T17:37:59.337553+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: cluster 2026-03-09T17:37:58.806839+0000 mgr.y (mgr.14505) 466 : cluster [DBG] pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 op/s 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: cluster 2026-03-09T17:37:58.806839+0000 mgr.y (mgr.14505) 466 : cluster [DBG] pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 op/s 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: cluster 2026-03-09T17:37:59.294829+0000 mon.a (mon.0) 2774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: cluster 2026-03-09T17:37:59.294829+0000 mon.a (mon.0) 2774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: audit 2026-03-09T17:37:59.296861+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: audit 2026-03-09T17:37:59.296861+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: cluster 2026-03-09T17:37:59.306687+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: cluster 2026-03-09T17:37:59.306687+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T17:38:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: audit 2026-03-09T17:37:59.336475+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: audit 2026-03-09T17:37:59.336475+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: audit 2026-03-09T17:37:59.337553+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:00.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:00 vm00 bash[28333]: audit 2026-03-09T17:37:59.337553+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:00.326255+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:00.326255+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:00.329157+0000 mon.b (mon.1) 500 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:00.329157+0000 mon.b (mon.1) 500 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: cluster 2026-03-09T17:38:00.333931+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: cluster 2026-03-09T17:38:00.333931+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:00.335369+0000 mon.a (mon.0) 2780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:00.335369+0000 mon.a (mon.0) 2780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: cluster 2026-03-09T17:38:01.326414+0000 mon.a (mon.0) 2781 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: cluster 2026-03-09T17:38:01.326414+0000 mon.a (mon.0) 2781 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:01.329721+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]': finished 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:01.329721+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]': finished 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: cluster 2026-03-09T17:38:01.335050+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: cluster 2026-03-09T17:38:01.335050+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:01.337467+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:01.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:01 vm02 bash[23351]: audit 2026-03-09T17:38:01.337467+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:00.326255+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:00.326255+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:00.329157+0000 mon.b (mon.1) 500 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:00.329157+0000 mon.b (mon.1) 500 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: cluster 2026-03-09T17:38:00.333931+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: cluster 2026-03-09T17:38:00.333931+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:00.335369+0000 mon.a (mon.0) 2780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:00.335369+0000 mon.a (mon.0) 2780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: cluster 2026-03-09T17:38:01.326414+0000 mon.a (mon.0) 2781 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: cluster 2026-03-09T17:38:01.326414+0000 mon.a (mon.0) 2781 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:01.329721+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]': finished 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:01.329721+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]': finished 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: cluster 2026-03-09T17:38:01.335050+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: cluster 2026-03-09T17:38:01.335050+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:01.337467+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:01.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:01 vm00 bash[20770]: audit 2026-03-09T17:38:01.337467+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:00.326255+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:00.326255+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:00.329157+0000 mon.b (mon.1) 500 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:00.329157+0000 mon.b (mon.1) 500 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: cluster 2026-03-09T17:38:00.333931+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: cluster 2026-03-09T17:38:00.333931+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:00.335369+0000 mon.a (mon.0) 2780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:00.335369+0000 mon.a (mon.0) 2780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]: dispatch 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: cluster 2026-03-09T17:38:01.326414+0000 mon.a (mon.0) 2781 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: cluster 2026-03-09T17:38:01.326414+0000 mon.a (mon.0) 2781 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:01.329721+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]': finished 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:01.329721+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-98", "mode": "writeback"}]': finished 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: cluster 2026-03-09T17:38:01.335050+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: cluster 2026-03-09T17:38:01.335050+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:01.337467+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:01.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:01 vm00 bash[28333]: audit 2026-03-09T17:38:01.337467+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.353 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:38:01 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: cluster 2026-03-09T17:38:00.807219+0000 mgr.y (mgr.14505) 467 : cluster [DBG] pgmap v758: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 op/s 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: cluster 2026-03-09T17:38:00.807219+0000 mgr.y (mgr.14505) 467 : cluster [DBG] pgmap v758: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 op/s 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:01.339460+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:01.339460+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:02.333224+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:02.333224+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: cluster 2026-03-09T17:38:02.335656+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: cluster 2026-03-09T17:38:02.335656+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:02.336086+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:02.336086+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:02.345320+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:02 vm02 bash[23351]: audit 2026-03-09T17:38:02.345320+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: cluster 2026-03-09T17:38:00.807219+0000 mgr.y (mgr.14505) 467 : cluster [DBG] pgmap v758: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 op/s 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: cluster 2026-03-09T17:38:00.807219+0000 mgr.y (mgr.14505) 467 : cluster [DBG] pgmap v758: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 op/s 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:01.339460+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:01.339460+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:02.333224+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:02.333224+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: cluster 2026-03-09T17:38:02.335656+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: cluster 2026-03-09T17:38:02.335656+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:02.336086+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:02.336086+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:02.345320+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:02 vm00 bash[20770]: audit 2026-03-09T17:38:02.345320+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: cluster 2026-03-09T17:38:00.807219+0000 mgr.y (mgr.14505) 467 : cluster [DBG] pgmap v758: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 op/s 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: cluster 2026-03-09T17:38:00.807219+0000 mgr.y (mgr.14505) 467 : cluster [DBG] pgmap v758: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 op/s 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:01.339460+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:01.339460+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:02.333224+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:02.333224+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: cluster 2026-03-09T17:38:02.335656+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: cluster 2026-03-09T17:38:02.335656+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:02.336086+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:02.336086+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:02.345320+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:02 vm00 bash[28333]: audit 2026-03-09T17:38:02.345320+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:01.994196+0000 mgr.y (mgr.14505) 468 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:01.994196+0000 mgr.y (mgr.14505) 468 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: cluster 2026-03-09T17:38:03.333414+0000 mon.a (mon.0) 2788 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: cluster 2026-03-09T17:38:03.333414+0000 mon.a (mon.0) 2788 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:03.337997+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:03.337997+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: cluster 2026-03-09T17:38:03.341509+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: cluster 2026-03-09T17:38:03.341509+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:03.344603+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:03.344603+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:03.346761+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:03 vm02 bash[23351]: audit 2026-03-09T17:38:03.346761+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:01.994196+0000 mgr.y (mgr.14505) 468 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:01.994196+0000 mgr.y (mgr.14505) 468 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: cluster 2026-03-09T17:38:03.333414+0000 mon.a (mon.0) 2788 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: cluster 2026-03-09T17:38:03.333414+0000 mon.a (mon.0) 2788 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:03.337997+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:03.337997+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: cluster 2026-03-09T17:38:03.341509+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: cluster 2026-03-09T17:38:03.341509+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:03.344603+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:03.344603+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:03.346761+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:03 vm00 bash[28333]: audit 2026-03-09T17:38:03.346761+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:01.994196+0000 mgr.y (mgr.14505) 468 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:01.994196+0000 mgr.y (mgr.14505) 468 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: cluster 2026-03-09T17:38:03.333414+0000 mon.a (mon.0) 2788 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: cluster 2026-03-09T17:38:03.333414+0000 mon.a (mon.0) 2788 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:03.337997+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:03.337997+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: cluster 2026-03-09T17:38:03.341509+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: cluster 2026-03-09T17:38:03.341509+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:03.344603+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:03.344603+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:03.346761+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:03 vm00 bash[20770]: audit 2026-03-09T17:38:03.346761+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: cluster 2026-03-09T17:38:02.808001+0000 mgr.y (mgr.14505) 469 : cluster [DBG] pgmap v761: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: cluster 2026-03-09T17:38:02.808001+0000 mgr.y (mgr.14505) 469 : cluster [DBG] pgmap v761: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: audit 2026-03-09T17:38:04.343234+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: audit 2026-03-09T17:38:04.343234+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: audit 2026-03-09T17:38:04.346594+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: audit 2026-03-09T17:38:04.346594+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: cluster 2026-03-09T17:38:04.350650+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: cluster 2026-03-09T17:38:04.350650+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: audit 2026-03-09T17:38:04.351548+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:04 vm02 bash[23351]: audit 2026-03-09T17:38:04.351548+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: cluster 2026-03-09T17:38:02.808001+0000 mgr.y (mgr.14505) 469 : cluster [DBG] pgmap v761: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: cluster 2026-03-09T17:38:02.808001+0000 mgr.y (mgr.14505) 469 : cluster [DBG] pgmap v761: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: audit 2026-03-09T17:38:04.343234+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: audit 2026-03-09T17:38:04.343234+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: audit 2026-03-09T17:38:04.346594+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: audit 2026-03-09T17:38:04.346594+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: cluster 2026-03-09T17:38:04.350650+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: cluster 2026-03-09T17:38:04.350650+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: audit 2026-03-09T17:38:04.351548+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:04 vm00 bash[20770]: audit 2026-03-09T17:38:04.351548+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: cluster 2026-03-09T17:38:02.808001+0000 mgr.y (mgr.14505) 469 : cluster [DBG] pgmap v761: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: cluster 2026-03-09T17:38:02.808001+0000 mgr.y (mgr.14505) 469 : cluster [DBG] pgmap v761: 292 pgs: 4 unknown, 288 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: audit 2026-03-09T17:38:04.343234+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: audit 2026-03-09T17:38:04.343234+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: audit 2026-03-09T17:38:04.346594+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: audit 2026-03-09T17:38:04.346594+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: cluster 2026-03-09T17:38:04.350650+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: cluster 2026-03-09T17:38:04.350650+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: audit 2026-03-09T17:38:04.351548+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:04 vm00 bash[28333]: audit 2026-03-09T17:38:04.351548+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: cluster 2026-03-09T17:38:04.808375+0000 mgr.y (mgr.14505) 470 : cluster [DBG] pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: cluster 2026-03-09T17:38:04.808375+0000 mgr.y (mgr.14505) 470 : cluster [DBG] pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: audit 2026-03-09T17:38:05.346737+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: audit 2026-03-09T17:38:05.346737+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: audit 2026-03-09T17:38:05.355506+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: audit 2026-03-09T17:38:05.355506+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: cluster 2026-03-09T17:38:05.358477+0000 mon.a (mon.0) 2796 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: cluster 2026-03-09T17:38:05.358477+0000 mon.a (mon.0) 2796 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: audit 2026-03-09T17:38:05.359077+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:06 vm02 bash[23351]: audit 2026-03-09T17:38:05.359077+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: cluster 2026-03-09T17:38:04.808375+0000 mgr.y (mgr.14505) 470 : cluster [DBG] pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: cluster 2026-03-09T17:38:04.808375+0000 mgr.y (mgr.14505) 470 : cluster [DBG] pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: audit 2026-03-09T17:38:05.346737+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: audit 2026-03-09T17:38:05.346737+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: audit 2026-03-09T17:38:05.355506+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: audit 2026-03-09T17:38:05.355506+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: cluster 2026-03-09T17:38:05.358477+0000 mon.a (mon.0) 2796 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: cluster 2026-03-09T17:38:05.358477+0000 mon.a (mon.0) 2796 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: audit 2026-03-09T17:38:05.359077+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:06 vm00 bash[20770]: audit 2026-03-09T17:38:05.359077+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: cluster 2026-03-09T17:38:04.808375+0000 mgr.y (mgr.14505) 470 : cluster [DBG] pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: cluster 2026-03-09T17:38:04.808375+0000 mgr.y (mgr.14505) 470 : cluster [DBG] pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: audit 2026-03-09T17:38:05.346737+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: audit 2026-03-09T17:38:05.346737+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: audit 2026-03-09T17:38:05.355506+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: audit 2026-03-09T17:38:05.355506+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: cluster 2026-03-09T17:38:05.358477+0000 mon.a (mon.0) 2796 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: cluster 2026-03-09T17:38:05.358477+0000 mon.a (mon.0) 2796 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T17:38:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: audit 2026-03-09T17:38:05.359077+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:06 vm00 bash[28333]: audit 2026-03-09T17:38:05.359077+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T17:38:06.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:38:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:38:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: audit 2026-03-09T17:38:06.350984+0000 mon.a (mon.0) 2798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: audit 2026-03-09T17:38:06.350984+0000 mon.a (mon.0) 2798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: cluster 2026-03-09T17:38:06.360167+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: cluster 2026-03-09T17:38:06.360167+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: audit 2026-03-09T17:38:06.401104+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: audit 2026-03-09T17:38:06.401104+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: audit 2026-03-09T17:38:06.402241+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: audit 2026-03-09T17:38:06.402241+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: cluster 2026-03-09T17:38:06.763190+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:07 vm02 bash[23351]: cluster 2026-03-09T17:38:06.763190+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: audit 2026-03-09T17:38:06.350984+0000 mon.a (mon.0) 2798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: audit 2026-03-09T17:38:06.350984+0000 mon.a (mon.0) 2798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: cluster 2026-03-09T17:38:06.360167+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: cluster 2026-03-09T17:38:06.360167+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: audit 2026-03-09T17:38:06.401104+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: audit 2026-03-09T17:38:06.401104+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: audit 2026-03-09T17:38:06.402241+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: audit 2026-03-09T17:38:06.402241+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: cluster 2026-03-09T17:38:06.763190+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:07 vm00 bash[20770]: cluster 2026-03-09T17:38:06.763190+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: audit 2026-03-09T17:38:06.350984+0000 mon.a (mon.0) 2798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: audit 2026-03-09T17:38:06.350984+0000 mon.a (mon.0) 2798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: cluster 2026-03-09T17:38:06.360167+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: cluster 2026-03-09T17:38:06.360167+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: audit 2026-03-09T17:38:06.401104+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: audit 2026-03-09T17:38:06.401104+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: audit 2026-03-09T17:38:06.402241+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: audit 2026-03-09T17:38:06.402241+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: cluster 2026-03-09T17:38:06.763190+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:07 vm00 bash[28333]: cluster 2026-03-09T17:38:06.763190+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: cluster 2026-03-09T17:38:06.809525+0000 mgr.y (mgr.14505) 471 : cluster [DBG] pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: cluster 2026-03-09T17:38:06.809525+0000 mgr.y (mgr.14505) 471 : cluster [DBG] pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: audit 2026-03-09T17:38:07.381050+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: audit 2026-03-09T17:38:07.381050+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: audit 2026-03-09T17:38:07.385486+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: audit 2026-03-09T17:38:07.385486+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: cluster 2026-03-09T17:38:07.392276+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: cluster 2026-03-09T17:38:07.392276+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: audit 2026-03-09T17:38:07.393025+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:08 vm00 bash[20770]: audit 2026-03-09T17:38:07.393025+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: cluster 2026-03-09T17:38:06.809525+0000 mgr.y (mgr.14505) 471 : cluster [DBG] pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: cluster 2026-03-09T17:38:06.809525+0000 mgr.y (mgr.14505) 471 : cluster [DBG] pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: audit 2026-03-09T17:38:07.381050+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: audit 2026-03-09T17:38:07.381050+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: audit 2026-03-09T17:38:07.385486+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: audit 2026-03-09T17:38:07.385486+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: cluster 2026-03-09T17:38:07.392276+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T17:38:08.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: cluster 2026-03-09T17:38:07.392276+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T17:38:08.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: audit 2026-03-09T17:38:07.393025+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:08 vm00 bash[28333]: audit 2026-03-09T17:38:07.393025+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: cluster 2026-03-09T17:38:06.809525+0000 mgr.y (mgr.14505) 471 : cluster [DBG] pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: cluster 2026-03-09T17:38:06.809525+0000 mgr.y (mgr.14505) 471 : cluster [DBG] pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: audit 2026-03-09T17:38:07.381050+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: audit 2026-03-09T17:38:07.381050+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: audit 2026-03-09T17:38:07.385486+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: audit 2026-03-09T17:38:07.385486+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: cluster 2026-03-09T17:38:07.392276+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: cluster 2026-03-09T17:38:07.392276+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: audit 2026-03-09T17:38:07.393025+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:08 vm02 bash[23351]: audit 2026-03-09T17:38:07.393025+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]: dispatch 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:09 vm00 bash[20770]: audit 2026-03-09T17:38:08.384414+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:09 vm00 bash[20770]: audit 2026-03-09T17:38:08.384414+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:09 vm00 bash[20770]: cluster 2026-03-09T17:38:08.387097+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:09 vm00 bash[20770]: cluster 2026-03-09T17:38:08.387097+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:09 vm00 bash[28333]: audit 2026-03-09T17:38:08.384414+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:09 vm00 bash[28333]: audit 2026-03-09T17:38:08.384414+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:09 vm00 bash[28333]: cluster 2026-03-09T17:38:08.387097+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T17:38:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:09 vm00 bash[28333]: cluster 2026-03-09T17:38:08.387097+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T17:38:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:09 vm02 bash[23351]: audit 2026-03-09T17:38:08.384414+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:09 vm02 bash[23351]: audit 2026-03-09T17:38:08.384414+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-98"}]': finished 2026-03-09T17:38:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:09 vm02 bash[23351]: cluster 2026-03-09T17:38:08.387097+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T17:38:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:09 vm02 bash[23351]: cluster 2026-03-09T17:38:08.387097+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:10 vm00 bash[28333]: cluster 2026-03-09T17:38:08.809864+0000 mgr.y (mgr.14505) 472 : cluster [DBG] pgmap v770: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:10 vm00 bash[28333]: cluster 2026-03-09T17:38:08.809864+0000 mgr.y (mgr.14505) 472 : cluster [DBG] pgmap v770: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:10 vm00 bash[28333]: cluster 2026-03-09T17:38:09.422556+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:10 vm00 bash[28333]: cluster 2026-03-09T17:38:09.422556+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:10 vm00 bash[20770]: cluster 2026-03-09T17:38:08.809864+0000 mgr.y (mgr.14505) 472 : cluster [DBG] pgmap v770: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:10 vm00 bash[20770]: cluster 2026-03-09T17:38:08.809864+0000 mgr.y (mgr.14505) 472 : cluster [DBG] pgmap v770: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:10 vm00 bash[20770]: cluster 2026-03-09T17:38:09.422556+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T17:38:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:10 vm00 bash[20770]: cluster 2026-03-09T17:38:09.422556+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T17:38:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:10 vm02 bash[23351]: cluster 2026-03-09T17:38:08.809864+0000 mgr.y (mgr.14505) 472 : cluster [DBG] pgmap v770: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:10 vm02 bash[23351]: cluster 2026-03-09T17:38:08.809864+0000 mgr.y (mgr.14505) 472 : cluster [DBG] pgmap v770: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:10 vm02 bash[23351]: cluster 2026-03-09T17:38:09.422556+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T17:38:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:10 vm02 bash[23351]: cluster 2026-03-09T17:38:09.422556+0000 mon.a (mon.0) 2807 : cluster [DBG] 
osdmap e497: 8 total, 8 up, 8 in 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: cluster 2026-03-09T17:38:10.423518+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: cluster 2026-03-09T17:38:10.423518+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: audit 2026-03-09T17:38:10.434080+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: audit 2026-03-09T17:38:10.434080+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: audit 2026-03-09T17:38:10.439434+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: audit 2026-03-09T17:38:10.439434+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: cluster 2026-03-09T17:38:10.810156+0000 mgr.y (mgr.14505) 473 : cluster [DBG] pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:11 vm00 bash[20770]: cluster 2026-03-09T17:38:10.810156+0000 mgr.y (mgr.14505) 473 : cluster [DBG] pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: cluster 2026-03-09T17:38:10.423518+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: cluster 2026-03-09T17:38:10.423518+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: audit 2026-03-09T17:38:10.434080+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: audit 2026-03-09T17:38:10.434080+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: audit 2026-03-09T17:38:10.439434+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: audit 2026-03-09T17:38:10.439434+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: cluster 2026-03-09T17:38:10.810156+0000 mgr.y (mgr.14505) 473 : cluster [DBG] pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:11 vm00 bash[28333]: cluster 2026-03-09T17:38:10.810156+0000 mgr.y (mgr.14505) 473 : cluster [DBG] pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: cluster 2026-03-09T17:38:10.423518+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: cluster 2026-03-09T17:38:10.423518+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: audit 2026-03-09T17:38:10.434080+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: audit 2026-03-09T17:38:10.434080+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: audit 2026-03-09T17:38:10.439434+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: audit 2026-03-09T17:38:10.439434+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: cluster 2026-03-09T17:38:10.810156+0000 mgr.y (mgr.14505) 473 : cluster [DBG] pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:11 vm02 bash[23351]: cluster 2026-03-09T17:38:10.810156+0000 mgr.y (mgr.14505) 473 : cluster [DBG] pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:12.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:38:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:11.452013+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:11.452013+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:11.460723+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:11.460723+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: cluster 2026-03-09T17:38:11.466124+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: cluster 2026-03-09T17:38:11.466124+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:11.476339+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:11.476339+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: cluster 2026-03-09T17:38:11.770186+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: cluster 2026-03-09T17:38:11.770186+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.004940+0000 mgr.y (mgr.14505) 474 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.004940+0000 mgr.y (mgr.14505) 474 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.455596+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.455596+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.463843+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:11.452013+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:11.452013+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:11.460723+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:11.460723+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: cluster 2026-03-09T17:38:11.466124+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: cluster 2026-03-09T17:38:11.466124+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:11.476339+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:11.476339+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: cluster 2026-03-09T17:38:11.770186+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: cluster 2026-03-09T17:38:11.770186+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.004940+0000 mgr.y (mgr.14505) 474 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.004940+0000 mgr.y (mgr.14505) 474 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.455596+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.455596+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.463843+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.463843+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: cluster 2026-03-09T17:38:12.465166+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: cluster 2026-03-09T17:38:12.465166+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.466054+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:12 vm00 bash[20770]: audit 2026-03-09T17:38:12.466054+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.463843+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: cluster 2026-03-09T17:38:12.465166+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: cluster 2026-03-09T17:38:12.465166+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.466054+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:12 vm00 bash[28333]: audit 2026-03-09T17:38:12.466054+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:11.452013+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:11.452013+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:11.460723+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:11.460723+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: cluster 2026-03-09T17:38:11.466124+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: cluster 2026-03-09T17:38:11.466124+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:11.476339+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:11.476339+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: cluster 2026-03-09T17:38:11.770186+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: cluster 2026-03-09T17:38:11.770186+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.004940+0000 mgr.y (mgr.14505) 474 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.004940+0000 mgr.y (mgr.14505) 474 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.455596+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.455596+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.463843+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.463843+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: cluster 2026-03-09T17:38:12.465166+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: cluster 2026-03-09T17:38:12.465166+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.466054+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:12 vm02 bash[23351]: audit 2026-03-09T17:38:12.466054+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: cluster 2026-03-09T17:38:12.811080+0000 mgr.y (mgr.14505) 475 : cluster [DBG] pgmap v776: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: cluster 2026-03-09T17:38:12.811080+0000 mgr.y (mgr.14505) 475 : cluster [DBG] pgmap v776: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:12.878334+0000 mon.c (mon.2) 620 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:12.878334+0000 mon.c (mon.2) 620 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:13.458934+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:13.458934+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: cluster 2026-03-09T17:38:13.462320+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: cluster 2026-03-09T17:38:13.462320+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:13.464820+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:13.464820+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:13.470468+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:13 vm00 bash[20770]: audit 2026-03-09T17:38:13.470468+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: cluster 2026-03-09T17:38:12.811080+0000 mgr.y (mgr.14505) 475 : cluster [DBG] pgmap v776: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: cluster 2026-03-09T17:38:12.811080+0000 mgr.y (mgr.14505) 475 : cluster [DBG] pgmap v776: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:12.878334+0000 mon.c (mon.2) 620 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:12.878334+0000 mon.c (mon.2) 620 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:13.458934+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:13.458934+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: cluster 2026-03-09T17:38:13.462320+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: cluster 2026-03-09T17:38:13.462320+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:13.464820+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:13.464820+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:13.470468+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:13 vm00 bash[28333]: audit 2026-03-09T17:38:13.470468+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: cluster 2026-03-09T17:38:12.811080+0000 mgr.y (mgr.14505) 475 : cluster [DBG] pgmap v776: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: cluster 2026-03-09T17:38:12.811080+0000 mgr.y (mgr.14505) 475 : cluster [DBG] pgmap v776: 292 pgs: 6 unknown, 286 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:12.878334+0000 mon.c (mon.2) 620 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:12.878334+0000 mon.c (mon.2) 620 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:13.458934+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:13.458934+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-6", "overlaypool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: cluster 2026-03-09T17:38:13.462320+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: cluster 2026-03-09T17:38:13.462320+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:13.464820+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:13.464820+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:13.470468+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:13 vm02 bash[23351]: audit 2026-03-09T17:38:13.470468+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: cluster 2026-03-09T17:38:14.459114+0000 mon.a (mon.0) 2820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: cluster 2026-03-09T17:38:14.459114+0000 mon.a (mon.0) 2820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: audit 2026-03-09T17:38:14.463532+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]': finished 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: audit 2026-03-09T17:38:14.463532+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]': finished 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: audit 2026-03-09T17:38:14.470159+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: audit 2026-03-09T17:38:14.470159+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: cluster 2026-03-09T17:38:14.470429+0000 mon.a (mon.0) 2822 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: cluster 2026-03-09T17:38:14.470429+0000 mon.a (mon.0) 2822 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: audit 2026-03-09T17:38:14.471614+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:14 vm00 bash[28333]: audit 2026-03-09T17:38:14.471614+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: cluster 2026-03-09T17:38:14.459114+0000 mon.a (mon.0) 2820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: cluster 2026-03-09T17:38:14.459114+0000 mon.a (mon.0) 2820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: audit 2026-03-09T17:38:14.463532+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]': finished 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: audit 2026-03-09T17:38:14.463532+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]': finished 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: audit 2026-03-09T17:38:14.470159+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: audit 2026-03-09T17:38:14.470159+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: cluster 2026-03-09T17:38:14.470429+0000 mon.a (mon.0) 2822 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: cluster 2026-03-09T17:38:14.470429+0000 mon.a (mon.0) 2822 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: audit 2026-03-09T17:38:14.471614+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:14 vm00 bash[20770]: audit 2026-03-09T17:38:14.471614+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: cluster 2026-03-09T17:38:14.459114+0000 mon.a (mon.0) 2820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: cluster 2026-03-09T17:38:14.459114+0000 mon.a (mon.0) 2820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: audit 2026-03-09T17:38:14.463532+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]': finished 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: audit 2026-03-09T17:38:14.463532+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-100", "mode": "writeback"}]': finished 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: audit 2026-03-09T17:38:14.470159+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: audit 2026-03-09T17:38:14.470159+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: cluster 2026-03-09T17:38:14.470429+0000 mon.a (mon.0) 2822 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: cluster 2026-03-09T17:38:14.470429+0000 mon.a (mon.0) 2822 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: audit 2026-03-09T17:38:14.471614+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:14 vm02 bash[23351]: audit 2026-03-09T17:38:14.471614+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: cluster 2026-03-09T17:38:14.811412+0000 mgr.y (mgr.14505) 476 : cluster [DBG] pgmap v779: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: cluster 2026-03-09T17:38:14.811412+0000 mgr.y (mgr.14505) 476 : cluster [DBG] pgmap v779: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: audit 2026-03-09T17:38:15.467051+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: audit 2026-03-09T17:38:15.467051+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: audit 2026-03-09T17:38:15.474223+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: audit 2026-03-09T17:38:15.474223+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: cluster 2026-03-09T17:38:15.477490+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: cluster 2026-03-09T17:38:15.477490+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: audit 2026-03-09T17:38:15.479075+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:15 vm00 bash[28333]: audit 2026-03-09T17:38:15.479075+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: cluster 2026-03-09T17:38:14.811412+0000 mgr.y (mgr.14505) 476 : cluster [DBG] pgmap v779: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: cluster 2026-03-09T17:38:14.811412+0000 mgr.y (mgr.14505) 476 : cluster [DBG] pgmap v779: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: audit 2026-03-09T17:38:15.467051+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: audit 2026-03-09T17:38:15.467051+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:15.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: audit 2026-03-09T17:38:15.474223+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: audit 2026-03-09T17:38:15.474223+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: cluster 2026-03-09T17:38:15.477490+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T17:38:15.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: cluster 2026-03-09T17:38:15.477490+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T17:38:15.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: audit 2026-03-09T17:38:15.479075+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:15 vm00 bash[20770]: audit 2026-03-09T17:38:15.479075+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: cluster 2026-03-09T17:38:14.811412+0000 mgr.y (mgr.14505) 476 : cluster [DBG] pgmap v779: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: cluster 2026-03-09T17:38:14.811412+0000 mgr.y (mgr.14505) 476 : cluster [DBG] pgmap v779: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: audit 2026-03-09T17:38:15.467051+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: audit 2026-03-09T17:38:15.467051+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: audit 2026-03-09T17:38:15.474223+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: audit 2026-03-09T17:38:15.474223+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: cluster 2026-03-09T17:38:15.477490+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: cluster 2026-03-09T17:38:15.477490+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: audit 2026-03-09T17:38:15.479075+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:15 vm02 bash[23351]: audit 2026-03-09T17:38:15.479075+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:38:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:38:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: audit 2026-03-09T17:38:16.498722+0000 mon.a (mon.0) 2827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: audit 2026-03-09T17:38:16.498722+0000 mon.a (mon.0) 2827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: cluster 2026-03-09T17:38:16.501850+0000 mon.a (mon.0) 2828 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: cluster 2026-03-09T17:38:16.501850+0000 mon.a (mon.0) 2828 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: audit 2026-03-09T17:38:16.502160+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: audit 2026-03-09T17:38:16.502160+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: audit 2026-03-09T17:38:16.503642+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: audit 2026-03-09T17:38:16.503642+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: cluster 2026-03-09T17:38:16.771123+0000 mon.a (mon.0) 2830 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: cluster 2026-03-09T17:38:16.771123+0000 mon.a (mon.0) 2830 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: cluster 2026-03-09T17:38:16.811827+0000 mgr.y (mgr.14505) 477 : cluster [DBG] pgmap v782: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:17 vm00 bash[20770]: cluster 2026-03-09T17:38:16.811827+0000 mgr.y (mgr.14505) 477 : cluster [DBG] pgmap v782: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: audit 2026-03-09T17:38:16.498722+0000 mon.a (mon.0) 2827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: audit 2026-03-09T17:38:16.498722+0000 mon.a (mon.0) 2827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: cluster 2026-03-09T17:38:16.501850+0000 mon.a (mon.0) 2828 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: cluster 2026-03-09T17:38:16.501850+0000 mon.a (mon.0) 2828 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: audit 2026-03-09T17:38:16.502160+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: audit 2026-03-09T17:38:16.502160+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: audit 2026-03-09T17:38:16.503642+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: audit 2026-03-09T17:38:16.503642+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: cluster 2026-03-09T17:38:16.771123+0000 mon.a (mon.0) 2830 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: cluster 2026-03-09T17:38:16.771123+0000 mon.a (mon.0) 2830 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: cluster 2026-03-09T17:38:16.811827+0000 mgr.y (mgr.14505) 477 : cluster [DBG] pgmap v782: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:17 vm00 bash[28333]: cluster 2026-03-09T17:38:16.811827+0000 mgr.y (mgr.14505) 477 : cluster [DBG] pgmap v782: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: audit 2026-03-09T17:38:16.498722+0000 mon.a (mon.0) 2827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: audit 2026-03-09T17:38:16.498722+0000 mon.a (mon.0) 2827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: cluster 2026-03-09T17:38:16.501850+0000 mon.a (mon.0) 2828 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: cluster 2026-03-09T17:38:16.501850+0000 mon.a (mon.0) 2828 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: audit 2026-03-09T17:38:16.502160+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: audit 2026-03-09T17:38:16.502160+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: audit 2026-03-09T17:38:16.503642+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: audit 2026-03-09T17:38:16.503642+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: cluster 2026-03-09T17:38:16.771123+0000 mon.a (mon.0) 2830 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: cluster 2026-03-09T17:38:16.771123+0000 mon.a (mon.0) 2830 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: cluster 2026-03-09T17:38:16.811827+0000 mgr.y (mgr.14505) 477 : cluster [DBG] pgmap v782: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:17 vm02 bash[23351]: cluster 2026-03-09T17:38:16.811827+0000 mgr.y (mgr.14505) 477 : cluster [DBG] pgmap v782: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: cluster 2026-03-09T17:38:17.499810+0000 mon.a (mon.0) 2831 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: cluster 2026-03-09T17:38:17.499810+0000 mon.a (mon.0) 2831 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: audit 2026-03-09T17:38:17.506535+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: audit 2026-03-09T17:38:17.506535+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: audit 2026-03-09T17:38:17.510846+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: audit 2026-03-09T17:38:17.510846+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: cluster 2026-03-09T17:38:17.521345+0000 mon.a (mon.0) 2833 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: cluster 2026-03-09T17:38:17.521345+0000 mon.a (mon.0) 2833 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: audit 2026-03-09T17:38:17.522572+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:18 vm00 bash[20770]: audit 2026-03-09T17:38:17.522572+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: cluster 2026-03-09T17:38:17.499810+0000 mon.a (mon.0) 2831 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: cluster 2026-03-09T17:38:17.499810+0000 mon.a (mon.0) 2831 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: audit 2026-03-09T17:38:17.506535+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: audit 2026-03-09T17:38:17.506535+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: audit 2026-03-09T17:38:17.510846+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: audit 2026-03-09T17:38:17.510846+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: cluster 2026-03-09T17:38:17.521345+0000 mon.a (mon.0) 2833 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: cluster 2026-03-09T17:38:17.521345+0000 mon.a (mon.0) 2833 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: audit 2026-03-09T17:38:17.522572+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:18 vm00 bash[28333]: audit 2026-03-09T17:38:17.522572+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: cluster 2026-03-09T17:38:17.499810+0000 mon.a (mon.0) 2831 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: cluster 2026-03-09T17:38:17.499810+0000 mon.a (mon.0) 2831 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: audit 2026-03-09T17:38:17.506535+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: audit 2026-03-09T17:38:17.506535+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: audit 2026-03-09T17:38:17.510846+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: audit 2026-03-09T17:38:17.510846+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: cluster 2026-03-09T17:38:17.521345+0000 mon.a (mon.0) 2833 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: cluster 2026-03-09T17:38:17.521345+0000 mon.a (mon.0) 2833 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: audit 2026-03-09T17:38:17.522572+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:18 vm02 bash[23351]: audit 2026-03-09T17:38:17.522572+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: audit 2026-03-09T17:38:18.510250+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: audit 2026-03-09T17:38:18.510250+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: cluster 2026-03-09T17:38:18.515572+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: cluster 2026-03-09T17:38:18.515572+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: audit 2026-03-09T17:38:18.558820+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: audit 2026-03-09T17:38:18.558820+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: audit 2026-03-09T17:38:18.560007+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: audit 2026-03-09T17:38:18.560007+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: cluster 2026-03-09T17:38:18.812199+0000 mgr.y (mgr.14505) 478 : cluster [DBG] pgmap v785: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:19 vm00 bash[20770]: cluster 2026-03-09T17:38:18.812199+0000 mgr.y (mgr.14505) 478 : cluster [DBG] pgmap v785: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: audit 2026-03-09T17:38:18.510250+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: audit 2026-03-09T17:38:18.510250+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: cluster 2026-03-09T17:38:18.515572+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: cluster 2026-03-09T17:38:18.515572+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: audit 2026-03-09T17:38:18.558820+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: audit 2026-03-09T17:38:18.558820+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: audit 2026-03-09T17:38:18.560007+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: audit 2026-03-09T17:38:18.560007+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: cluster 2026-03-09T17:38:18.812199+0000 mgr.y (mgr.14505) 478 : cluster [DBG] pgmap v785: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:19 vm00 bash[28333]: cluster 2026-03-09T17:38:18.812199+0000 mgr.y (mgr.14505) 478 : cluster [DBG] pgmap v785: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: audit 2026-03-09T17:38:18.510250+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: audit 2026-03-09T17:38:18.510250+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: cluster 2026-03-09T17:38:18.515572+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: cluster 2026-03-09T17:38:18.515572+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: audit 2026-03-09T17:38:18.558820+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: audit 2026-03-09T17:38:18.558820+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: audit 2026-03-09T17:38:18.560007+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: audit 2026-03-09T17:38:18.560007+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: cluster 2026-03-09T17:38:18.812199+0000 mgr.y (mgr.14505) 478 : cluster [DBG] pgmap v785: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:19 vm02 bash[23351]: cluster 2026-03-09T17:38:18.812199+0000 mgr.y (mgr.14505) 478 : cluster [DBG] pgmap v785: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: audit 2026-03-09T17:38:19.535481+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: audit 2026-03-09T17:38:19.535481+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: cluster 2026-03-09T17:38:19.540439+0000 mon.a (mon.0) 2839 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: cluster 2026-03-09T17:38:19.540439+0000 mon.a (mon.0) 2839 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: audit 2026-03-09T17:38:19.543001+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: audit 2026-03-09T17:38:19.543001+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: audit 2026-03-09T17:38:19.545508+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:20 vm02 bash[23351]: audit 2026-03-09T17:38:19.545508+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: audit 2026-03-09T17:38:19.535481+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: audit 2026-03-09T17:38:19.535481+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: cluster 2026-03-09T17:38:19.540439+0000 mon.a (mon.0) 2839 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: cluster 2026-03-09T17:38:19.540439+0000 mon.a (mon.0) 2839 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: audit 2026-03-09T17:38:19.543001+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: audit 2026-03-09T17:38:19.543001+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: audit 2026-03-09T17:38:19.545508+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:20 vm00 bash[28333]: audit 2026-03-09T17:38:19.545508+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: audit 2026-03-09T17:38:19.535481+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: audit 2026-03-09T17:38:19.535481+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]': finished 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: cluster 2026-03-09T17:38:19.540439+0000 mon.a (mon.0) 2839 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: cluster 2026-03-09T17:38:19.540439+0000 mon.a (mon.0) 2839 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: audit 2026-03-09T17:38:19.543001+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: audit 2026-03-09T17:38:19.543001+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: audit 2026-03-09T17:38:19.545508+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:20 vm00 bash[20770]: audit 2026-03-09T17:38:19.545508+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]: dispatch 2026-03-09T17:38:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:21 vm02 bash[23351]: audit 2026-03-09T17:38:20.539264+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:21 vm02 bash[23351]: audit 2026-03-09T17:38:20.539264+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:21 vm02 bash[23351]: cluster 2026-03-09T17:38:20.542140+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T17:38:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:21 vm02 bash[23351]: cluster 2026-03-09T17:38:20.542140+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T17:38:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:21 vm02 bash[23351]: cluster 2026-03-09T17:38:20.812562+0000 mgr.y (mgr.14505) 479 : cluster [DBG] pgmap v788: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:21 vm02 bash[23351]: cluster 2026-03-09T17:38:20.812562+0000 mgr.y (mgr.14505) 479 : cluster [DBG] pgmap v788: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:21 vm00 bash[20770]: audit 2026-03-09T17:38:20.539264+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:21 vm00 bash[20770]: audit 2026-03-09T17:38:20.539264+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:21 vm00 bash[20770]: cluster 2026-03-09T17:38:20.542140+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:21 vm00 bash[20770]: cluster 2026-03-09T17:38:20.542140+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:21 vm00 bash[20770]: cluster 2026-03-09T17:38:20.812562+0000 mgr.y (mgr.14505) 479 : cluster [DBG] pgmap v788: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:21 vm00 bash[20770]: cluster 2026-03-09T17:38:20.812562+0000 mgr.y (mgr.14505) 479 : cluster [DBG] pgmap v788: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:21 vm00 bash[28333]: audit 2026-03-09T17:38:20.539264+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:21 vm00 bash[28333]: audit 2026-03-09T17:38:20.539264+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-100"}]': finished 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:21 vm00 bash[28333]: cluster 2026-03-09T17:38:20.542140+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:21 vm00 bash[28333]: cluster 2026-03-09T17:38:20.542140+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:21 vm00 bash[28333]: cluster 2026-03-09T17:38:20.812562+0000 mgr.y (mgr.14505) 479 : cluster [DBG] pgmap v788: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:21 vm00 bash[28333]: cluster 2026-03-09T17:38:20.812562+0000 mgr.y (mgr.14505) 479 : cluster [DBG] pgmap v788: 292 pgs: 292 active+clean; 8.3 MiB data, 965 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:38:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:38:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:22 vm02 bash[23351]: cluster 2026-03-09T17:38:21.566658+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T17:38:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:22 vm02 bash[23351]: cluster 2026-03-09T17:38:21.566658+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T17:38:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:22 vm02 bash[23351]: audit 2026-03-09T17:38:22.009981+0000 mgr.y (mgr.14505) 480 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:22 vm02 bash[23351]: audit 2026-03-09T17:38:22.009981+0000 mgr.y (mgr.14505) 480 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:22 vm00 bash[20770]: cluster 2026-03-09T17:38:21.566658+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:22 vm00 bash[20770]: cluster 2026-03-09T17:38:21.566658+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:22 vm00 bash[20770]: audit 2026-03-09T17:38:22.009981+0000 mgr.y (mgr.14505) 480 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:22 vm00 bash[20770]: audit 2026-03-09T17:38:22.009981+0000 mgr.y (mgr.14505) 480 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:22 vm00 bash[28333]: cluster 2026-03-09T17:38:21.566658+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:22 vm00 bash[28333]: cluster 2026-03-09T17:38:21.566658+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:22 vm00 bash[28333]: audit 2026-03-09T17:38:22.009981+0000 mgr.y (mgr.14505) 480 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:22 vm00 bash[28333]: audit 2026-03-09T17:38:22.009981+0000 mgr.y (mgr.14505) 480 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: cluster 2026-03-09T17:38:22.572116+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: cluster 2026-03-09T17:38:22.572116+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: audit 2026-03-09T17:38:22.574168+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: audit 2026-03-09T17:38:22.574168+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: audit 2026-03-09T17:38:22.576074+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: audit 2026-03-09T17:38:22.576074+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: cluster 2026-03-09T17:38:22.813213+0000 mgr.y (mgr.14505) 481 : cluster [DBG] pgmap v791: 292 pgs: 31 creating+peering, 1 unknown, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:23 vm02 bash[23351]: cluster 2026-03-09T17:38:22.813213+0000 mgr.y (mgr.14505) 481 : cluster [DBG] pgmap v791: 292 pgs: 31 creating+peering, 1 unknown, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: cluster 2026-03-09T17:38:22.572116+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: cluster 2026-03-09T17:38:22.572116+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: audit 2026-03-09T17:38:22.574168+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: audit 2026-03-09T17:38:22.574168+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: audit 2026-03-09T17:38:22.576074+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: audit 2026-03-09T17:38:22.576074+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: cluster 2026-03-09T17:38:22.813213+0000 mgr.y (mgr.14505) 481 : cluster [DBG] pgmap v791: 292 pgs: 31 creating+peering, 1 unknown, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:23 vm00 bash[28333]: cluster 2026-03-09T17:38:22.813213+0000 mgr.y (mgr.14505) 481 : cluster [DBG] pgmap v791: 292 pgs: 31 creating+peering, 1 unknown, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: cluster 2026-03-09T17:38:22.572116+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: cluster 2026-03-09T17:38:22.572116+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: audit 2026-03-09T17:38:22.574168+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: audit 2026-03-09T17:38:22.574168+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: audit 2026-03-09T17:38:22.576074+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: audit 2026-03-09T17:38:22.576074+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: cluster 2026-03-09T17:38:22.813213+0000 mgr.y (mgr.14505) 481 : cluster [DBG] pgmap v791: 292 pgs: 31 creating+peering, 1 unknown, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:23 vm00 bash[20770]: cluster 2026-03-09T17:38:22.813213+0000 mgr.y (mgr.14505) 481 : cluster [DBG] pgmap v791: 292 pgs: 31 creating+peering, 1 unknown, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: cluster 2026-03-09T17:38:23.569928+0000 mon.a (mon.0) 2846 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: cluster 2026-03-09T17:38:23.569928+0000 mon.a (mon.0) 2846 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.572292+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.572292+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: cluster 2026-03-09T17:38:23.577531+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: cluster 2026-03-09T17:38:23.577531+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.611791+0000 mon.b (mon.1) 519 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.611791+0000 mon.b (mon.1) 519 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.614057+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.614057+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.619462+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:24 vm02 bash[23351]: audit 2026-03-09T17:38:23.619462+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: cluster 2026-03-09T17:38:23.569928+0000 mon.a (mon.0) 2846 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: cluster 2026-03-09T17:38:23.569928+0000 mon.a (mon.0) 2846 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.572292+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.572292+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: cluster 2026-03-09T17:38:23.577531+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: cluster 2026-03-09T17:38:23.577531+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.611791+0000 mon.b (mon.1) 519 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.611791+0000 mon.b (mon.1) 519 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.614057+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.614057+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.619462+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:24 vm00 bash[20770]: audit 2026-03-09T17:38:23.619462+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: cluster 2026-03-09T17:38:23.569928+0000 mon.a (mon.0) 2846 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: cluster 2026-03-09T17:38:23.569928+0000 mon.a (mon.0) 2846 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.572292+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.572292+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: cluster 2026-03-09T17:38:23.577531+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: cluster 2026-03-09T17:38:23.577531+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.611791+0000 mon.b (mon.1) 519 : audit [DBG] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.611791+0000 mon.b (mon.1) 519 : audit [DBG] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.614057+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.614057+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.619462+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:24 vm00 bash[28333]: audit 2026-03-09T17:38:23.619462+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: audit 2026-03-09T17:38:24.606564+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: audit 2026-03-09T17:38:24.606564+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: cluster 2026-03-09T17:38:24.610267+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: cluster 2026-03-09T17:38:24.610267+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: audit 2026-03-09T17:38:24.613127+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: audit 2026-03-09T17:38:24.613127+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: audit 2026-03-09T17:38:24.614248+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: audit 2026-03-09T17:38:24.614248+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: cluster 2026-03-09T17:38:24.813532+0000 mgr.y (mgr.14505) 482 : cluster [DBG] pgmap v794: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:25 vm02 bash[23351]: cluster 2026-03-09T17:38:24.813532+0000 mgr.y (mgr.14505) 482 : cluster [DBG] pgmap v794: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: audit 2026-03-09T17:38:24.606564+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: audit 2026-03-09T17:38:24.606564+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: cluster 2026-03-09T17:38:24.610267+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: cluster 2026-03-09T17:38:24.610267+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: audit 2026-03-09T17:38:24.613127+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: audit 2026-03-09T17:38:24.613127+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: audit 2026-03-09T17:38:24.614248+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: audit 2026-03-09T17:38:24.614248+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: cluster 2026-03-09T17:38:24.813532+0000 mgr.y (mgr.14505) 482 : cluster [DBG] pgmap v794: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:25 vm00 bash[28333]: cluster 2026-03-09T17:38:24.813532+0000 mgr.y (mgr.14505) 482 : cluster [DBG] pgmap v794: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: audit 2026-03-09T17:38:24.606564+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: audit 2026-03-09T17:38:24.606564+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: cluster 2026-03-09T17:38:24.610267+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: cluster 2026-03-09T17:38:24.610267+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: audit 2026-03-09T17:38:24.613127+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: audit 2026-03-09T17:38:24.613127+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: audit 2026-03-09T17:38:24.614248+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: audit 2026-03-09T17:38:24.614248+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: cluster 2026-03-09T17:38:24.813532+0000 mgr.y (mgr.14505) 482 : cluster [DBG] pgmap v794: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:25 vm00 bash[20770]: cluster 2026-03-09T17:38:24.813532+0000 mgr.y (mgr.14505) 482 : cluster [DBG] pgmap v794: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:26.640 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:38:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:38:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: audit 2026-03-09T17:38:25.617214+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: audit 2026-03-09T17:38:25.617214+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: audit 2026-03-09T17:38:25.624624+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: audit 2026-03-09T17:38:25.624624+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: cluster 2026-03-09T17:38:25.632730+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: cluster 2026-03-09T17:38:25.632730+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: audit 2026-03-09T17:38:25.634315+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:26 vm00 bash[28333]: audit 2026-03-09T17:38:25.634315+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: audit 2026-03-09T17:38:25.617214+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: audit 2026-03-09T17:38:25.617214+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: audit 2026-03-09T17:38:25.624624+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: audit 2026-03-09T17:38:25.624624+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: cluster 2026-03-09T17:38:25.632730+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: cluster 2026-03-09T17:38:25.632730+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: audit 2026-03-09T17:38:25.634315+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:26 vm00 bash[20770]: audit 2026-03-09T17:38:25.634315+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: audit 2026-03-09T17:38:25.617214+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: audit 2026-03-09T17:38:25.617214+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: audit 2026-03-09T17:38:25.624624+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: audit 2026-03-09T17:38:25.624624+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: cluster 2026-03-09T17:38:25.632730+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: cluster 2026-03-09T17:38:25.632730+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: audit 2026-03-09T17:38:25.634315+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:26 vm02 bash[23351]: audit 2026-03-09T17:38:25.634315+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.620460+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.620460+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: cluster 2026-03-09T17:38:26.622701+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: cluster 2026-03-09T17:38:26.622701+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.673107+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.673107+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.674058+0000 mon.b (mon.1) 524 : audit [INF] from='client.? 
192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.674058+0000 mon.b (mon.1) 524 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.674244+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.674244+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.675084+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: audit 2026-03-09T17:38:26.675084+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: cluster 2026-03-09T17:38:26.813896+0000 mgr.y (mgr.14505) 483 : cluster [DBG] pgmap v797: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:27 vm00 bash[28333]: cluster 2026-03-09T17:38:26.813896+0000 mgr.y (mgr.14505) 483 : cluster [DBG] pgmap v797: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.620460+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.620460+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: cluster 2026-03-09T17:38:26.622701+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: cluster 2026-03-09T17:38:26.622701+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.673107+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.673107+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.674058+0000 mon.b (mon.1) 524 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.674058+0000 mon.b (mon.1) 524 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.674244+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.674244+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.675084+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: audit 2026-03-09T17:38:26.675084+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: cluster 2026-03-09T17:38:26.813896+0000 mgr.y (mgr.14505) 483 : cluster [DBG] pgmap v797: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:27 vm00 bash[20770]: cluster 2026-03-09T17:38:26.813896+0000 mgr.y (mgr.14505) 483 : cluster [DBG] pgmap v797: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.620460+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.620460+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: cluster 2026-03-09T17:38:26.622701+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: cluster 2026-03-09T17:38:26.622701+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.673107+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.673107+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.674058+0000 mon.b (mon.1) 524 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.674058+0000 mon.b (mon.1) 524 : audit [INF] from='client.? 192.168.123.100:0/2243177863' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.674244+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.674244+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-6"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.675084+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: audit 2026-03-09T17:38:26.675084+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-6", "tierpool": "test-rados-api-vm00-60118-102"}]: dispatch 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: cluster 2026-03-09T17:38:26.813896+0000 mgr.y (mgr.14505) 483 : cluster [DBG] pgmap v797: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:27 vm02 bash[23351]: cluster 2026-03-09T17:38:26.813896+0000 mgr.y (mgr.14505) 483 : cluster [DBG] pgmap v797: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 987 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:28 vm00 bash[28333]: cluster 2026-03-09T17:38:27.658401+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:28 vm00 bash[28333]: cluster 2026-03-09T17:38:27.658401+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:28 vm00 bash[28333]: audit 2026-03-09T17:38:27.885626+0000 mon.c (mon.2) 621 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:28 vm00 bash[28333]: audit 2026-03-09T17:38:27.885626+0000 mon.c (mon.2) 621 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:28 vm00 bash[20770]: cluster 2026-03-09T17:38:27.658401+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:28 vm00 bash[20770]: cluster 2026-03-09T17:38:27.658401+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:28 vm00 bash[20770]: audit 2026-03-09T17:38:27.885626+0000 mon.c (mon.2) 621 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:29.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:28 vm00 bash[20770]: audit 2026-03-09T17:38:27.885626+0000 mon.c (mon.2) 621 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:28 vm02 bash[23351]: cluster 2026-03-09T17:38:27.658401+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T17:38:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:28 vm02 bash[23351]: cluster 2026-03-09T17:38:27.658401+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T17:38:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:28 vm02 bash[23351]: audit 2026-03-09T17:38:27.885626+0000 mon.c (mon.2) 621 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:28 vm02 bash[23351]: audit 2026-03-09T17:38:27.885626+0000 mon.c (mon.2) 621 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: cluster 2026-03-09T17:38:28.663376+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: cluster 2026-03-09T17:38:28.663376+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.680529+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.680529+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.681672+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.681672+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.682558+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.682558+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 
192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.682980+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.682980+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.683695+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.683695+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.684015+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: audit 2026-03-09T17:38:28.684015+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: cluster 2026-03-09T17:38:28.814238+0000 mgr.y (mgr.14505) 484 : cluster [DBG] pgmap v800: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:29 vm00 bash[28333]: cluster 2026-03-09T17:38:28.814238+0000 mgr.y (mgr.14505) 484 : cluster [DBG] pgmap v800: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: cluster 2026-03-09T17:38:28.663376+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: cluster 2026-03-09T17:38:28.663376+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.680529+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 
192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.680529+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.681672+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.681672+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.682558+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.682558+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.682980+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.682980+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.683695+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.683695+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.684015+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: audit 2026-03-09T17:38:28.684015+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: cluster 2026-03-09T17:38:28.814238+0000 mgr.y (mgr.14505) 484 : cluster [DBG] pgmap v800: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:30.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:29 vm00 bash[20770]: cluster 2026-03-09T17:38:28.814238+0000 mgr.y (mgr.14505) 484 : cluster [DBG] pgmap v800: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: cluster 2026-03-09T17:38:28.663376+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: cluster 2026-03-09T17:38:28.663376+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.680529+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.680529+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.681672+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.681672+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.682558+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.682558+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 
192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.682980+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.682980+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.683695+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.683695+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.684015+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: audit 2026-03-09T17:38:28.684015+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: cluster 2026-03-09T17:38:28.814238+0000 mgr.y (mgr.14505) 484 : cluster [DBG] pgmap v800: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:29 vm02 bash[23351]: cluster 2026-03-09T17:38:28.814238+0000 mgr.y (mgr.14505) 484 : cluster [DBG] pgmap v800: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: audit 2026-03-09T17:38:29.663148+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: audit 2026-03-09T17:38:29.663148+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: cluster 2026-03-09T17:38:29.665733+0000 mon.a (mon.0) 2866 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: cluster 2026-03-09T17:38:29.665733+0000 mon.a (mon.0) 2866 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: audit 2026-03-09T17:38:29.673349+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: audit 2026-03-09T17:38:29.673349+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: audit 2026-03-09T17:38:29.684003+0000 mon.a (mon.0) 2867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:30 vm00 bash[20770]: audit 2026-03-09T17:38:29.684003+0000 mon.a (mon.0) 2867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: audit 2026-03-09T17:38:29.663148+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: audit 2026-03-09T17:38:29.663148+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: cluster 2026-03-09T17:38:29.665733+0000 mon.a (mon.0) 2866 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: cluster 2026-03-09T17:38:29.665733+0000 mon.a (mon.0) 2866 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: audit 2026-03-09T17:38:29.673349+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: audit 2026-03-09T17:38:29.673349+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: audit 2026-03-09T17:38:29.684003+0000 mon.a (mon.0) 2867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:30 vm00 bash[28333]: audit 2026-03-09T17:38:29.684003+0000 mon.a (mon.0) 2867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: audit 2026-03-09T17:38:29.663148+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: audit 2026-03-09T17:38:29.663148+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-60118-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: cluster 2026-03-09T17:38:29.665733+0000 mon.a (mon.0) 2866 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: cluster 2026-03-09T17:38:29.665733+0000 mon.a (mon.0) 2866 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: audit 2026-03-09T17:38:29.673349+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: audit 2026-03-09T17:38:29.673349+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: audit 2026-03-09T17:38:29.684003+0000 mon.a (mon.0) 2867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:30 vm02 bash[23351]: audit 2026-03-09T17:38:29.684003+0000 mon.a (mon.0) 2867 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:38:32.018 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:31 vm02 bash[23351]: cluster 2026-03-09T17:38:30.705388+0000 mon.a (mon.0) 2868 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T17:38:32.018 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:31 vm02 bash[23351]: cluster 2026-03-09T17:38:30.705388+0000 mon.a (mon.0) 2868 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T17:38:32.018 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:31 vm02 bash[23351]: cluster 2026-03-09T17:38:30.814574+0000 mgr.y (mgr.14505) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:32.018 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:31 vm02 bash[23351]: cluster 2026-03-09T17:38:30.814574+0000 mgr.y (mgr.14505) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:31 vm00 bash[20770]: cluster 2026-03-09T17:38:30.705388+0000 mon.a (mon.0) 2868 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:31 vm00 bash[20770]: cluster 2026-03-09T17:38:30.705388+0000 mon.a (mon.0) 2868 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:31 vm00 bash[20770]: cluster 2026-03-09T17:38:30.814574+0000 mgr.y (mgr.14505) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:31 vm00 bash[20770]: cluster 2026-03-09T17:38:30.814574+0000 mgr.y (mgr.14505) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:31 vm00 bash[28333]: cluster 2026-03-09T17:38:30.705388+0000 mon.a (mon.0) 2868 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:31 vm00 bash[28333]: cluster 2026-03-09T17:38:30.705388+0000 mon.a (mon.0) 2868 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:31 vm00 bash[28333]: cluster 2026-03-09T17:38:30.814574+0000 mgr.y (mgr.14505) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:31 vm00 bash[28333]: cluster 2026-03-09T17:38:30.814574+0000 mgr.y (mgr.14505) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:32.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:38:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: audit 2026-03-09T17:38:31.700056+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: audit 2026-03-09T17:38:31.700056+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: cluster 2026-03-09T17:38:31.720239+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: cluster 2026-03-09T17:38:31.720239+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: cluster 2026-03-09T17:38:31.773145+0000 mon.a (mon.0) 2871 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: cluster 2026-03-09T17:38:31.773145+0000 mon.a (mon.0) 2871 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: audit 2026-03-09T17:38:32.018005+0000 mgr.y (mgr.14505) 486 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:33.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:32 vm02 bash[23351]: audit 2026-03-09T17:38:32.018005+0000 mgr.y (mgr.14505) 486 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: audit 2026-03-09T17:38:31.700056+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: audit 2026-03-09T17:38:31.700056+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: cluster 2026-03-09T17:38:31.720239+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: cluster 2026-03-09T17:38:31.720239+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: cluster 2026-03-09T17:38:31.773145+0000 mon.a (mon.0) 2871 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: cluster 2026-03-09T17:38:31.773145+0000 mon.a (mon.0) 2871 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: audit 2026-03-09T17:38:32.018005+0000 mgr.y (mgr.14505) 486 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:32 vm00 bash[28333]: audit 2026-03-09T17:38:32.018005+0000 mgr.y (mgr.14505) 486 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: audit 2026-03-09T17:38:31.700056+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: audit 2026-03-09T17:38:31.700056+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-60118-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: cluster 2026-03-09T17:38:31.720239+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: cluster 2026-03-09T17:38:31.720239+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: cluster 2026-03-09T17:38:31.773145+0000 mon.a (mon.0) 2871 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: cluster 2026-03-09T17:38:31.773145+0000 mon.a (mon.0) 2871 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: audit 2026-03-09T17:38:32.018005+0000 mgr.y (mgr.14505) 486 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:32 vm00 bash[20770]: audit 2026-03-09T17:38:32.018005+0000 mgr.y (mgr.14505) 486 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:33 vm02 bash[23351]: cluster 2026-03-09T17:38:32.814813+0000 mgr.y (mgr.14505) 487 : cluster [DBG] pgmap v806: 236 pgs: 6 creating+peering, 2 unknown, 228 active+clean; 455 KiB data, 971 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:33 vm02 bash[23351]: cluster 2026-03-09T17:38:32.814813+0000 mgr.y (mgr.14505) 487 : cluster [DBG] pgmap v806: 236 pgs: 6 creating+peering, 2 unknown, 228 active+clean; 455 KiB data, 971 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:33 vm02 bash[23351]: cluster 2026-03-09T17:38:32.816748+0000 mon.a (mon.0) 2872 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T17:38:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:33 vm02 bash[23351]: cluster 2026-03-09T17:38:32.816748+0000 mon.a (mon.0) 2872 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T17:38:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:33 vm02 bash[23351]: cluster 2026-03-09T17:38:33.825174+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T17:38:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:33 vm02 bash[23351]: cluster 2026-03-09T17:38:33.825174+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:33 vm00 bash[20770]: cluster 2026-03-09T17:38:32.814813+0000 mgr.y (mgr.14505) 487 : cluster [DBG] pgmap v806: 236 pgs: 6 creating+peering, 2 unknown, 228 active+clean; 455 KiB data, 971 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:34.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:33 vm00 bash[20770]: cluster 2026-03-09T17:38:32.814813+0000 mgr.y (mgr.14505) 487 : cluster [DBG] pgmap v806: 236 pgs: 6 creating+peering, 2 unknown, 228 active+clean; 455 KiB data, 971 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:33 vm00 bash[20770]: cluster 2026-03-09T17:38:32.816748+0000 mon.a (mon.0) 2872 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:33 vm00 bash[20770]: cluster 2026-03-09T17:38:32.816748+0000 mon.a (mon.0) 2872 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:33 vm00 bash[20770]: cluster 2026-03-09T17:38:33.825174+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:33 vm00 bash[20770]: cluster 2026-03-09T17:38:33.825174+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:33 vm00 bash[28333]: cluster 2026-03-09T17:38:32.814813+0000 mgr.y (mgr.14505) 487 : cluster [DBG] pgmap v806: 236 pgs: 6 creating+peering, 2 unknown, 228 active+clean; 455 KiB data, 971 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:33 vm00 bash[28333]: cluster 2026-03-09T17:38:32.814813+0000 mgr.y (mgr.14505) 487 : cluster [DBG] pgmap v806: 236 pgs: 6 creating+peering, 2 unknown, 228 active+clean; 455 KiB data, 971 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:33 vm00 bash[28333]: cluster 2026-03-09T17:38:32.816748+0000 mon.a (mon.0) 2872 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:33 vm00 bash[28333]: cluster 2026-03-09T17:38:32.816748+0000 mon.a (mon.0) 2872 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:33 vm00 bash[28333]: cluster 2026-03-09T17:38:33.825174+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T17:38:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:33 vm00 bash[28333]: cluster 2026-03-09T17:38:33.825174+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T17:38:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:34 vm02 bash[23351]: audit 2026-03-09T17:38:33.838346+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:34 vm02 bash[23351]: audit 2026-03-09T17:38:33.838346+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:34 vm02 bash[23351]: audit 2026-03-09T17:38:33.843578+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:34 vm02 bash[23351]: audit 2026-03-09T17:38:33.843578+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:34 vm00 bash[28333]: audit 2026-03-09T17:38:33.838346+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:34 vm00 bash[28333]: audit 2026-03-09T17:38:33.838346+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:34 vm00 bash[28333]: audit 2026-03-09T17:38:33.843578+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:34 vm00 bash[28333]: audit 2026-03-09T17:38:33.843578+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:34 vm00 bash[20770]: audit 2026-03-09T17:38:33.838346+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:34 vm00 bash[20770]: audit 2026-03-09T17:38:33.838346+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:34 vm00 bash[20770]: audit 2026-03-09T17:38:33.843578+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:34 vm00 bash[20770]: audit 2026-03-09T17:38:33.843578+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: cluster 2026-03-09T17:38:34.815166+0000 mgr.y (mgr.14505) 488 : cluster [DBG] pgmap v808: 268 pgs: 32 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: cluster 2026-03-09T17:38:34.815166+0000 mgr.y (mgr.14505) 488 : cluster [DBG] pgmap v808: 268 pgs: 32 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: audit 2026-03-09T17:38:34.820820+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: audit 2026-03-09T17:38:34.820820+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: cluster 2026-03-09T17:38:34.831776+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: cluster 2026-03-09T17:38:34.831776+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: cluster 2026-03-09T17:38:35.828965+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: cluster 2026-03-09T17:38:35.828965+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: audit 2026-03-09T17:38:35.848021+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: audit 2026-03-09T17:38:35.848021+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: audit 2026-03-09T17:38:35.848646+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:35 vm02 bash[23351]: audit 2026-03-09T17:38:35.848646+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: cluster 2026-03-09T17:38:34.815166+0000 mgr.y (mgr.14505) 488 : cluster [DBG] pgmap v808: 268 pgs: 32 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: cluster 2026-03-09T17:38:34.815166+0000 mgr.y (mgr.14505) 488 : cluster [DBG] pgmap v808: 268 pgs: 32 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: audit 2026-03-09T17:38:34.820820+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: audit 2026-03-09T17:38:34.820820+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: cluster 2026-03-09T17:38:34.831776+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: cluster 2026-03-09T17:38:34.831776+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: cluster 2026-03-09T17:38:35.828965+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: cluster 2026-03-09T17:38:35.828965+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: audit 2026-03-09T17:38:35.848021+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: audit 2026-03-09T17:38:35.848021+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 
192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: audit 2026-03-09T17:38:35.848646+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:35 vm00 bash[28333]: audit 2026-03-09T17:38:35.848646+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: cluster 2026-03-09T17:38:34.815166+0000 mgr.y (mgr.14505) 488 : cluster [DBG] pgmap v808: 268 pgs: 32 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: cluster 2026-03-09T17:38:34.815166+0000 mgr.y (mgr.14505) 488 : cluster [DBG] pgmap v808: 268 pgs: 32 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: audit 2026-03-09T17:38:34.820820+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: audit 2026-03-09T17:38:34.820820+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: cluster 2026-03-09T17:38:34.831776+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: cluster 2026-03-09T17:38:34.831776+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: cluster 2026-03-09T17:38:35.828965+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: cluster 2026-03-09T17:38:35.828965+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: audit 2026-03-09T17:38:35.848021+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 
192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: audit 2026-03-09T17:38:35.848021+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: audit 2026-03-09T17:38:35.848646+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:35 vm00 bash[20770]: audit 2026-03-09T17:38:35.848646+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:38:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:38:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: cluster 2026-03-09T17:38:36.773741+0000 mon.a (mon.0) 2879 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: cluster 2026-03-09T17:38:36.773741+0000 mon.a (mon.0) 2879 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: audit 2026-03-09T17:38:36.829009+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: audit 2026-03-09T17:38:36.829009+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: cluster 2026-03-09T17:38:36.841416+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: cluster 2026-03-09T17:38:36.841416+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: audit 2026-03-09T17:38:36.852389+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 
192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: audit 2026-03-09T17:38:36.852389+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: audit 2026-03-09T17:38:36.859032+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:36 vm00 bash[28333]: audit 2026-03-09T17:38:36.859032+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: cluster 2026-03-09T17:38:36.773741+0000 mon.a (mon.0) 2879 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: cluster 2026-03-09T17:38:36.773741+0000 mon.a (mon.0) 2879 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: audit 2026-03-09T17:38:36.829009+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: audit 2026-03-09T17:38:36.829009+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: cluster 2026-03-09T17:38:36.841416+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: cluster 2026-03-09T17:38:36.841416+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: audit 2026-03-09T17:38:36.852389+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: audit 2026-03-09T17:38:36.852389+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 
192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: audit 2026-03-09T17:38:36.859032+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:36 vm00 bash[20770]: audit 2026-03-09T17:38:36.859032+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: cluster 2026-03-09T17:38:36.773741+0000 mon.a (mon.0) 2879 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: cluster 2026-03-09T17:38:36.773741+0000 mon.a (mon.0) 2879 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: audit 2026-03-09T17:38:36.829009+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: audit 2026-03-09T17:38:36.829009+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: cluster 2026-03-09T17:38:36.841416+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: cluster 2026-03-09T17:38:36.841416+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: audit 2026-03-09T17:38:36.852389+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: audit 2026-03-09T17:38:36.852389+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: audit 2026-03-09T17:38:36.859032+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:36 vm02 bash[23351]: audit 2026-03-09T17:38:36.859032+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: cluster 2026-03-09T17:38:36.815562+0000 mgr.y (mgr.14505) 489 : cluster [DBG] pgmap v811: 300 pgs: 64 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: cluster 2026-03-09T17:38:36.815562+0000 mgr.y (mgr.14505) 489 : cluster [DBG] pgmap v811: 300 pgs: 64 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: audit 2026-03-09T17:38:37.831843+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: audit 2026-03-09T17:38:37.831843+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: cluster 2026-03-09T17:38:37.838809+0000 mon.a (mon.0) 2884 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: cluster 2026-03-09T17:38:37.838809+0000 mon.a (mon.0) 2884 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: audit 2026-03-09T17:38:37.841085+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: audit 2026-03-09T17:38:37.841085+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: audit 2026-03-09T17:38:37.841263+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:37 vm00 bash[20770]: audit 2026-03-09T17:38:37.841263+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: cluster 2026-03-09T17:38:36.815562+0000 mgr.y (mgr.14505) 489 : cluster [DBG] pgmap v811: 300 pgs: 64 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: cluster 2026-03-09T17:38:36.815562+0000 mgr.y (mgr.14505) 489 : cluster [DBG] pgmap v811: 300 pgs: 64 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: audit 2026-03-09T17:38:37.831843+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: audit 2026-03-09T17:38:37.831843+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: cluster 2026-03-09T17:38:37.838809+0000 mon.a (mon.0) 2884 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: cluster 2026-03-09T17:38:37.838809+0000 mon.a (mon.0) 2884 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: audit 2026-03-09T17:38:37.841085+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: audit 2026-03-09T17:38:37.841085+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: audit 2026-03-09T17:38:37.841263+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:37 vm00 bash[28333]: audit 2026-03-09T17:38:37.841263+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: cluster 2026-03-09T17:38:36.815562+0000 mgr.y (mgr.14505) 489 : cluster [DBG] pgmap v811: 300 pgs: 64 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: cluster 2026-03-09T17:38:36.815562+0000 mgr.y (mgr.14505) 489 : cluster [DBG] pgmap v811: 300 pgs: 64 unknown, 6 creating+peering, 230 active+clean; 455 KiB data, 967 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: audit 2026-03-09T17:38:37.831843+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: audit 2026-03-09T17:38:37.831843+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: cluster 2026-03-09T17:38:37.838809+0000 mon.a (mon.0) 2884 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: cluster 2026-03-09T17:38:37.838809+0000 mon.a (mon.0) 2884 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: audit 2026-03-09T17:38:37.841085+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: audit 2026-03-09T17:38:37.841085+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: audit 2026-03-09T17:38:37.841263+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:37 vm02 bash[23351]: audit 2026-03-09T17:38:37.841263+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: cluster 2026-03-09T17:38:38.815960+0000 mgr.y (mgr.14505) 490 : cluster [DBG] pgmap v814: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: cluster 2026-03-09T17:38:38.815960+0000 mgr.y (mgr.14505) 490 : cluster [DBG] pgmap v814: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: audit 2026-03-09T17:38:38.835182+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: audit 2026-03-09T17:38:38.835182+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: cluster 2026-03-09T17:38:38.838828+0000 mon.a (mon.0) 2887 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: cluster 2026-03-09T17:38:38.838828+0000 mon.a (mon.0) 2887 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: audit 2026-03-09T17:38:38.845582+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: audit 2026-03-09T17:38:38.845582+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: audit 2026-03-09T17:38:38.848064+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:39 vm02 bash[23351]: audit 2026-03-09T17:38:38.848064+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: cluster 2026-03-09T17:38:38.815960+0000 mgr.y (mgr.14505) 490 : cluster [DBG] pgmap v814: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: cluster 2026-03-09T17:38:38.815960+0000 mgr.y (mgr.14505) 490 : cluster [DBG] pgmap v814: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: audit 2026-03-09T17:38:38.835182+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: audit 2026-03-09T17:38:38.835182+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: cluster 2026-03-09T17:38:38.838828+0000 mon.a (mon.0) 2887 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: cluster 2026-03-09T17:38:38.838828+0000 mon.a (mon.0) 2887 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: audit 2026-03-09T17:38:38.845582+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: audit 2026-03-09T17:38:38.845582+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: audit 2026-03-09T17:38:38.848064+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:39 vm00 bash[28333]: audit 2026-03-09T17:38:38.848064+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: cluster 2026-03-09T17:38:38.815960+0000 mgr.y (mgr.14505) 490 : cluster [DBG] pgmap v814: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: cluster 2026-03-09T17:38:38.815960+0000 mgr.y (mgr.14505) 490 : cluster [DBG] pgmap v814: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: audit 2026-03-09T17:38:38.835182+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: audit 2026-03-09T17:38:38.835182+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-107", "overlaypool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: cluster 2026-03-09T17:38:38.838828+0000 mon.a (mon.0) 2887 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: cluster 2026-03-09T17:38:38.838828+0000 mon.a (mon.0) 2887 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: audit 2026-03-09T17:38:38.845582+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: audit 2026-03-09T17:38:38.845582+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: audit 2026-03-09T17:38:38.848064+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:39 vm00 bash[20770]: audit 2026-03-09T17:38:38.848064+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: cluster 2026-03-09T17:38:39.835350+0000 mon.a (mon.0) 2889 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: cluster 2026-03-09T17:38:39.835350+0000 mon.a (mon.0) 2889 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: audit 2026-03-09T17:38:39.838681+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: audit 2026-03-09T17:38:39.838681+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: cluster 2026-03-09T17:38:39.847405+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: cluster 2026-03-09T17:38:39.847405+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: audit 2026-03-09T17:38:39.887274+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: audit 2026-03-09T17:38:39.887274+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: audit 2026-03-09T17:38:39.887655+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:40 vm02 bash[23351]: audit 2026-03-09T17:38:39.887655+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: cluster 2026-03-09T17:38:39.835350+0000 mon.a (mon.0) 2889 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: cluster 2026-03-09T17:38:39.835350+0000 mon.a (mon.0) 2889 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: audit 2026-03-09T17:38:39.838681+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: audit 2026-03-09T17:38:39.838681+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: cluster 2026-03-09T17:38:39.847405+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: cluster 2026-03-09T17:38:39.847405+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: audit 2026-03-09T17:38:39.887274+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: audit 2026-03-09T17:38:39.887274+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: audit 2026-03-09T17:38:39.887655+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:40 vm00 bash[20770]: audit 2026-03-09T17:38:39.887655+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: cluster 2026-03-09T17:38:39.835350+0000 mon.a (mon.0) 2889 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: cluster 2026-03-09T17:38:39.835350+0000 mon.a (mon.0) 2889 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: audit 2026-03-09T17:38:39.838681+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: audit 2026-03-09T17:38:39.838681+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-107-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: cluster 2026-03-09T17:38:39.847405+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: cluster 2026-03-09T17:38:39.847405+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: audit 2026-03-09T17:38:39.887274+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: audit 2026-03-09T17:38:39.887274+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: audit 2026-03-09T17:38:39.887655+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:41.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:40 vm00 bash[28333]: audit 2026-03-09T17:38:39.887655+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]: dispatch 2026-03-09T17:38:42.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:38:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: cluster 2026-03-09T17:38:40.816363+0000 mgr.y (mgr.14505) 491 : cluster [DBG] pgmap v817: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: cluster 2026-03-09T17:38:40.816363+0000 mgr.y (mgr.14505) 491 : cluster [DBG] pgmap v817: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: audit 2026-03-09T17:38:40.864712+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]': finished 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: audit 2026-03-09T17:38:40.864712+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]': finished 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: cluster 2026-03-09T17:38:40.879748+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: cluster 2026-03-09T17:38:40.879748+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: audit 2026-03-09T17:38:40.884982+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: audit 2026-03-09T17:38:40.884982+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: audit 2026-03-09T17:38:40.885364+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: audit 2026-03-09T17:38:40.885364+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: cluster 2026-03-09T17:38:41.774301+0000 mon.a (mon.0) 2896 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:41 vm02 bash[23351]: cluster 2026-03-09T17:38:41.774301+0000 mon.a (mon.0) 2896 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: cluster 2026-03-09T17:38:40.816363+0000 mgr.y (mgr.14505) 491 : cluster [DBG] pgmap v817: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: cluster 2026-03-09T17:38:40.816363+0000 mgr.y (mgr.14505) 491 : cluster [DBG] pgmap v817: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: audit 2026-03-09T17:38:40.864712+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]': finished 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: audit 2026-03-09T17:38:40.864712+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]': finished 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: cluster 2026-03-09T17:38:40.879748+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: cluster 2026-03-09T17:38:40.879748+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: audit 2026-03-09T17:38:40.884982+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: audit 2026-03-09T17:38:40.884982+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: audit 2026-03-09T17:38:40.885364+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: audit 2026-03-09T17:38:40.885364+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: cluster 2026-03-09T17:38:41.774301+0000 mon.a (mon.0) 2896 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:41 vm00 bash[28333]: cluster 2026-03-09T17:38:41.774301+0000 mon.a (mon.0) 2896 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: cluster 2026-03-09T17:38:40.816363+0000 mgr.y (mgr.14505) 491 : cluster [DBG] pgmap v817: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: cluster 2026-03-09T17:38:40.816363+0000 mgr.y (mgr.14505) 491 : cluster [DBG] pgmap v817: 300 pgs: 300 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: audit 2026-03-09T17:38:40.864712+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]': finished 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: audit 2026-03-09T17:38:40.864712+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-107"}]': finished 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: cluster 2026-03-09T17:38:40.879748+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: cluster 2026-03-09T17:38:40.879748+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: audit 2026-03-09T17:38:40.884982+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: audit 2026-03-09T17:38:40.884982+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.100:0/1527637481' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: audit 2026-03-09T17:38:40.885364+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: audit 2026-03-09T17:38:40.885364+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]: dispatch 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: cluster 2026-03-09T17:38:41.774301+0000 mon.a (mon.0) 2896 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:41 vm00 bash[20770]: cluster 2026-03-09T17:38:41.774301+0000 mon.a (mon.0) 2896 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: cluster 2026-03-09T17:38:41.864899+0000 mon.a (mon.0) 2897 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: cluster 2026-03-09T17:38:41.864899+0000 mon.a (mon.0) 2897 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: audit 2026-03-09T17:38:41.867731+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: audit 2026-03-09T17:38:41.867731+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: cluster 2026-03-09T17:38:41.877540+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: cluster 2026-03-09T17:38:41.877540+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: audit 2026-03-09T17:38:42.026002+0000 mgr.y (mgr.14505) 492 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:42 vm00 bash[28333]: audit 2026-03-09T17:38:42.026002+0000 mgr.y (mgr.14505) 492 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: cluster 2026-03-09T17:38:41.864899+0000 mon.a (mon.0) 2897 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: cluster 2026-03-09T17:38:41.864899+0000 mon.a (mon.0) 2897 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: audit 2026-03-09T17:38:41.867731+0000 mon.a (mon.0) 2898 : audit 
[INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: audit 2026-03-09T17:38:41.867731+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: cluster 2026-03-09T17:38:41.877540+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: cluster 2026-03-09T17:38:41.877540+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: audit 2026-03-09T17:38:42.026002+0000 mgr.y (mgr.14505) 492 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:42 vm00 bash[20770]: audit 2026-03-09T17:38:42.026002+0000 mgr.y (mgr.14505) 492 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: cluster 2026-03-09T17:38:41.864899+0000 mon.a (mon.0) 2897 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: cluster 2026-03-09T17:38:41.864899+0000 mon.a (mon.0) 2897 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: audit 2026-03-09T17:38:41.867731+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: audit 2026-03-09T17:38:41.867731+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-107", "tierpool": "test-rados-api-vm00-60118-107-cache"}]': finished 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: cluster 2026-03-09T17:38:41.877540+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: cluster 2026-03-09T17:38:41.877540+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: audit 2026-03-09T17:38:42.026002+0000 mgr.y (mgr.14505) 492 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:42 vm02 bash[23351]: audit 2026-03-09T17:38:42.026002+0000 mgr.y (mgr.14505) 492 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:43 vm00 bash[28333]: cluster 2026-03-09T17:38:42.817209+0000 mgr.y (mgr.14505) 493 : cluster [DBG] pgmap v820: 300 pgs: 300 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:43 vm00 bash[28333]: cluster 2026-03-09T17:38:42.817209+0000 mgr.y (mgr.14505) 493 : cluster [DBG] pgmap v820: 300 pgs: 300 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:43 vm00 bash[28333]: cluster 2026-03-09T17:38:42.895850+0000 mon.a (mon.0) 2900 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:43 vm00 bash[28333]: cluster 2026-03-09T17:38:42.895850+0000 mon.a (mon.0) 2900 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:43 vm00 bash[28333]: audit 2026-03-09T17:38:42.920199+0000 mon.c (mon.2) 633 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:43 vm00 bash[28333]: audit 2026-03-09T17:38:42.920199+0000 mon.c (mon.2) 633 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:43 vm00 bash[20770]: cluster 2026-03-09T17:38:42.817209+0000 mgr.y (mgr.14505) 493 : cluster [DBG] pgmap v820: 300 pgs: 300 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:43 vm00 bash[20770]: cluster 2026-03-09T17:38:42.817209+0000 mgr.y (mgr.14505) 493 : cluster [DBG] pgmap v820: 300 pgs: 300 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:43 vm00 bash[20770]: cluster 2026-03-09T17:38:42.895850+0000 mon.a (mon.0) 2900 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 
2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:43 vm00 bash[20770]: cluster 2026-03-09T17:38:42.895850+0000 mon.a (mon.0) 2900 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:43 vm00 bash[20770]: audit 2026-03-09T17:38:42.920199+0000 mon.c (mon.2) 633 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:43 vm00 bash[20770]: audit 2026-03-09T17:38:42.920199+0000 mon.c (mon.2) 633 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:43 vm02 bash[23351]: cluster 2026-03-09T17:38:42.817209+0000 mgr.y (mgr.14505) 493 : cluster [DBG] pgmap v820: 300 pgs: 300 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T17:38:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:43 vm02 bash[23351]: cluster 2026-03-09T17:38:42.817209+0000 mgr.y (mgr.14505) 493 : cluster [DBG] pgmap v820: 300 pgs: 300 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T17:38:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:43 vm02 bash[23351]: cluster 2026-03-09T17:38:42.895850+0000 mon.a (mon.0) 2900 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T17:38:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:43 vm02 bash[23351]: cluster 2026-03-09T17:38:42.895850+0000 mon.a (mon.0) 2900 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T17:38:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:43 vm02 bash[23351]: audit 2026-03-09T17:38:42.920199+0000 mon.c (mon.2) 633 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:43 vm02 bash[23351]: audit 2026-03-09T17:38:42.920199+0000 mon.c (mon.2) 633 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: cluster 2026-03-09T17:38:43.911480+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: cluster 2026-03-09T17:38:43.911480+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.924907+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.924907+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.926323+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.926323+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.926422+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.926422+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.926842+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.926842+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.927296+0000 mon.a (mon.0) 2903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.927296+0000 mon.a (mon.0) 2903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.927809+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:44 vm00 bash[20770]: audit 2026-03-09T17:38:43.927809+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: cluster 2026-03-09T17:38:43.911480+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: cluster 2026-03-09T17:38:43.911480+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.924907+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.924907+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.926323+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.926323+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.926422+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.926422+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.926842+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.926842+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.927296+0000 mon.a (mon.0) 2903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.927296+0000 mon.a (mon.0) 2903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.927809+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:44 vm00 bash[28333]: audit 2026-03-09T17:38:43.927809+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: cluster 2026-03-09T17:38:43.911480+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T17:38:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: cluster 2026-03-09T17:38:43.911480+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T17:38:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.924907+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.924907+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.926323+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.926323+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.926422+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.926422+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.926842+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.926842+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.927296+0000 mon.a (mon.0) 2903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.927296+0000 mon.a (mon.0) 2903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.927809+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:45.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:44 vm02 bash[23351]: audit 2026-03-09T17:38:43.927809+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: cluster 2026-03-09T17:38:44.817632+0000 mgr.y (mgr.14505) 494 : cluster [DBG] pgmap v823: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: cluster 2026-03-09T17:38:44.817632+0000 mgr.y (mgr.14505) 494 : cluster [DBG] pgmap v823: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: audit 2026-03-09T17:38:44.913719+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: audit 2026-03-09T17:38:44.913719+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: audit 2026-03-09T17:38:44.920259+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: audit 2026-03-09T17:38:44.920259+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: cluster 2026-03-09T17:38:44.925346+0000 mon.a (mon.0) 2906 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: cluster 2026-03-09T17:38:44.925346+0000 mon.a (mon.0) 2906 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: audit 2026-03-09T17:38:44.926304+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:45 vm00 bash[28333]: audit 2026-03-09T17:38:44.926304+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: cluster 2026-03-09T17:38:44.817632+0000 mgr.y (mgr.14505) 494 : cluster [DBG] pgmap v823: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: cluster 2026-03-09T17:38:44.817632+0000 mgr.y (mgr.14505) 494 : cluster [DBG] pgmap v823: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: audit 2026-03-09T17:38:44.913719+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: audit 2026-03-09T17:38:44.913719+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: audit 2026-03-09T17:38:44.920259+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: audit 2026-03-09T17:38:44.920259+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: cluster 2026-03-09T17:38:44.925346+0000 mon.a (mon.0) 2906 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T17:38:46.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: cluster 2026-03-09T17:38:44.925346+0000 mon.a (mon.0) 2906 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T17:38:46.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: audit 2026-03-09T17:38:44.926304+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:45 vm00 bash[20770]: audit 2026-03-09T17:38:44.926304+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: cluster 2026-03-09T17:38:44.817632+0000 mgr.y (mgr.14505) 494 : cluster [DBG] pgmap v823: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: cluster 2026-03-09T17:38:44.817632+0000 mgr.y (mgr.14505) 494 : cluster [DBG] pgmap v823: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: audit 2026-03-09T17:38:44.913719+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: audit 2026-03-09T17:38:44.913719+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: audit 2026-03-09T17:38:44.920259+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: audit 2026-03-09T17:38:44.920259+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: cluster 2026-03-09T17:38:44.925346+0000 mon.a (mon.0) 2906 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: cluster 2026-03-09T17:38:44.925346+0000 mon.a (mon.0) 2906 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: audit 2026-03-09T17:38:44.926304+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:45 vm02 bash[23351]: audit 2026-03-09T17:38:44.926304+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:38:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:38:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:46 vm00 bash[28333]: cluster 2026-03-09T17:38:45.938021+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:46 vm00 bash[28333]: cluster 2026-03-09T17:38:45.938021+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:46 vm00 bash[28333]: audit 2026-03-09T17:38:46.921140+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:46 vm00 bash[28333]: audit 2026-03-09T17:38:46.921140+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:46 vm00 bash[28333]: cluster 2026-03-09T17:38:46.931485+0000 mon.a (mon.0) 2910 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:46 vm00 bash[28333]: cluster 2026-03-09T17:38:46.931485+0000 mon.a (mon.0) 2910 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:46 vm00 bash[20770]: cluster 2026-03-09T17:38:45.938021+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:46 vm00 bash[20770]: cluster 2026-03-09T17:38:45.938021+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:46 vm00 bash[20770]: audit 2026-03-09T17:38:46.921140+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:46 vm00 bash[20770]: audit 2026-03-09T17:38:46.921140+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:46 vm00 bash[20770]: cluster 2026-03-09T17:38:46.931485+0000 mon.a (mon.0) 2910 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T17:38:47.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:46 vm00 bash[20770]: cluster 2026-03-09T17:38:46.931485+0000 mon.a (mon.0) 2910 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T17:38:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:46 vm02 bash[23351]: cluster 2026-03-09T17:38:45.938021+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T17:38:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:46 vm02 bash[23351]: cluster 2026-03-09T17:38:45.938021+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T17:38:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:46 vm02 bash[23351]: audit 2026-03-09T17:38:46.921140+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:46 vm02 bash[23351]: audit 2026-03-09T17:38:46.921140+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:46 vm02 bash[23351]: cluster 2026-03-09T17:38:46.931485+0000 mon.a (mon.0) 2910 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T17:38:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:46 vm02 bash[23351]: cluster 2026-03-09T17:38:46.931485+0000 mon.a (mon.0) 2910 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: cluster 2026-03-09T17:38:46.818019+0000 mgr.y (mgr.14505) 495 : cluster [DBG] pgmap v826: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: cluster 2026-03-09T17:38:46.818019+0000 mgr.y (mgr.14505) 495 : cluster [DBG] pgmap v826: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: cluster 2026-03-09T17:38:47.931134+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: cluster 2026-03-09T17:38:47.931134+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: audit 2026-03-09T17:38:47.931263+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: audit 2026-03-09T17:38:47.931263+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: audit 2026-03-09T17:38:47.932938+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:48 vm02 bash[23351]: audit 2026-03-09T17:38:47.932938+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: cluster 2026-03-09T17:38:46.818019+0000 mgr.y (mgr.14505) 495 : cluster [DBG] pgmap v826: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: cluster 2026-03-09T17:38:46.818019+0000 mgr.y (mgr.14505) 495 : cluster [DBG] pgmap v826: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: cluster 2026-03-09T17:38:47.931134+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: cluster 2026-03-09T17:38:47.931134+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: audit 2026-03-09T17:38:47.931263+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: audit 2026-03-09T17:38:47.931263+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: audit 2026-03-09T17:38:47.932938+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:48 vm00 bash[20770]: audit 2026-03-09T17:38:47.932938+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: cluster 2026-03-09T17:38:46.818019+0000 mgr.y (mgr.14505) 495 : cluster [DBG] pgmap v826: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: cluster 2026-03-09T17:38:46.818019+0000 mgr.y (mgr.14505) 495 : cluster [DBG] pgmap v826: 236 pgs: 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: cluster 2026-03-09T17:38:47.931134+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: cluster 2026-03-09T17:38:47.931134+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: audit 2026-03-09T17:38:47.931263+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: audit 2026-03-09T17:38:47.931263+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: audit 2026-03-09T17:38:47.932938+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:48 vm00 bash[28333]: audit 2026-03-09T17:38:47.932938+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: cluster 2026-03-09T17:38:48.818596+0000 mgr.y (mgr.14505) 496 : cluster [DBG] pgmap v829: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: cluster 2026-03-09T17:38:48.818596+0000 mgr.y (mgr.14505) 496 : cluster [DBG] pgmap v829: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:48.928557+0000 mon.a (mon.0) 2913 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:48.928557+0000 mon.a (mon.0) 2913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:48.932337+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:48.932337+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: cluster 2026-03-09T17:38:48.933130+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: cluster 2026-03-09T17:38:48.933130+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:48.933749+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:48.933749+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: cluster 2026-03-09T17:38:49.055354+0000 mon.a (mon.0) 2916 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: cluster 2026-03-09T17:38:49.055354+0000 mon.a (mon.0) 2916 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:49.437755+0000 mon.c (mon.2) 634 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:38:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:50 vm02 bash[23351]: audit 2026-03-09T17:38:49.437755+0000 mon.c (mon.2) 634 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: cluster 2026-03-09T17:38:48.818596+0000 mgr.y (mgr.14505) 496 : cluster [DBG] pgmap v829: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: cluster 2026-03-09T17:38:48.818596+0000 mgr.y (mgr.14505) 496 : cluster [DBG] pgmap v829: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:48.928557+0000 mon.a (mon.0) 2913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:48.928557+0000 mon.a (mon.0) 2913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:48.932337+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:48.932337+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: cluster 2026-03-09T17:38:48.933130+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: cluster 2026-03-09T17:38:48.933130+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:48.933749+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:48.933749+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: cluster 2026-03-09T17:38:49.055354+0000 mon.a (mon.0) 2916 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: cluster 2026-03-09T17:38:49.055354+0000 mon.a (mon.0) 2916 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:49.437755+0000 mon.c (mon.2) 634 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:50 vm00 bash[20770]: audit 2026-03-09T17:38:49.437755+0000 mon.c (mon.2) 634 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: cluster 2026-03-09T17:38:48.818596+0000 mgr.y (mgr.14505) 496 : cluster [DBG] pgmap v829: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: cluster 2026-03-09T17:38:48.818596+0000 mgr.y (mgr.14505) 496 : cluster [DBG] pgmap v829: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:48.928557+0000 mon.a (mon.0) 2913 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:48.928557+0000 mon.a (mon.0) 2913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:48.932337+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:48.932337+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: cluster 2026-03-09T17:38:48.933130+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: cluster 2026-03-09T17:38:48.933130+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:48.933749+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:48.933749+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: cluster 2026-03-09T17:38:49.055354+0000 mon.a (mon.0) 2916 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: cluster 2026-03-09T17:38:49.055354+0000 mon.a (mon.0) 2916 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:49.437755+0000 mon.c (mon.2) 634 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:38:50.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:50 vm00 bash[28333]: audit 2026-03-09T17:38:49.437755+0000 mon.c (mon.2) 634 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: audit 2026-03-09T17:38:50.042043+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: audit 2026-03-09T17:38:50.042043+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: cluster 2026-03-09T17:38:50.044933+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: cluster 2026-03-09T17:38:50.044933+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: audit 2026-03-09T17:38:50.047799+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: audit 2026-03-09T17:38:50.047799+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: audit 2026-03-09T17:38:50.057919+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:51 vm02 bash[23351]: audit 2026-03-09T17:38:50.057919+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: audit 2026-03-09T17:38:50.042043+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: audit 2026-03-09T17:38:50.042043+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: cluster 2026-03-09T17:38:50.044933+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: cluster 2026-03-09T17:38:50.044933+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: audit 2026-03-09T17:38:50.047799+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: audit 2026-03-09T17:38:50.047799+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: audit 2026-03-09T17:38:50.057919+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:51 vm00 bash[20770]: audit 2026-03-09T17:38:50.057919+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: audit 2026-03-09T17:38:50.042043+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: audit 2026-03-09T17:38:50.042043+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: cluster 2026-03-09T17:38:50.044933+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: cluster 2026-03-09T17:38:50.044933+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: audit 2026-03-09T17:38:50.047799+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: audit 2026-03-09T17:38:50.047799+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: audit 2026-03-09T17:38:50.057919+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:51 vm00 bash[28333]: audit 2026-03-09T17:38:50.057919+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: cluster 2026-03-09T17:38:50.818910+0000 mgr.y (mgr.14505) 497 : cluster [DBG] pgmap v832: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: cluster 2026-03-09T17:38:50.818910+0000 mgr.y (mgr.14505) 497 : cluster [DBG] pgmap v832: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: audit 2026-03-09T17:38:51.066880+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: audit 2026-03-09T17:38:51.066880+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: audit 2026-03-09T17:38:51.076095+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: audit 2026-03-09T17:38:51.076095+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: cluster 2026-03-09T17:38:51.077506+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: cluster 2026-03-09T17:38:51.077506+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T17:38:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: audit 2026-03-09T17:38:51.078268+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:52 vm02 bash[23351]: audit 2026-03-09T17:38:51.078268+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:38:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: cluster 2026-03-09T17:38:50.818910+0000 mgr.y (mgr.14505) 497 : cluster [DBG] pgmap v832: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: cluster 2026-03-09T17:38:50.818910+0000 mgr.y (mgr.14505) 497 : cluster [DBG] pgmap v832: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: audit 2026-03-09T17:38:51.066880+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: audit 2026-03-09T17:38:51.066880+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: audit 2026-03-09T17:38:51.076095+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: audit 2026-03-09T17:38:51.076095+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: cluster 2026-03-09T17:38:51.077506+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: cluster 2026-03-09T17:38:51.077506+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: audit 2026-03-09T17:38:51.078268+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:52 vm00 bash[20770]: audit 2026-03-09T17:38:51.078268+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: cluster 2026-03-09T17:38:50.818910+0000 mgr.y (mgr.14505) 497 : cluster [DBG] pgmap v832: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: cluster 2026-03-09T17:38:50.818910+0000 mgr.y (mgr.14505) 497 : cluster [DBG] pgmap v832: 276 pgs: 3 creating+activating, 10 creating+peering, 27 unknown, 236 active+clean; 455 KiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: audit 2026-03-09T17:38:51.066880+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: audit 2026-03-09T17:38:51.066880+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-109", "overlaypool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: audit 2026-03-09T17:38:51.076095+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: audit 2026-03-09T17:38:51.076095+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: cluster 2026-03-09T17:38:51.077506+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T17:38:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: cluster 2026-03-09T17:38:51.077506+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T17:38:52.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: audit 2026-03-09T17:38:51.078268+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:52.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:52 vm00 bash[28333]: audit 2026-03-09T17:38:51.078268+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.039058+0000 mgr.y (mgr.14505) 498 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.039058+0000 mgr.y (mgr.14505) 498 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: cluster 2026-03-09T17:38:52.066780+0000 mon.a (mon.0) 2923 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: cluster 2026-03-09T17:38:52.066780+0000 mon.a (mon.0) 2923 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.070824+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.070824+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.077365+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.077365+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: cluster 2026-03-09T17:38:52.081390+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: cluster 2026-03-09T17:38:52.081390+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.082711+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:52.082711+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:53.081610+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:53.081610+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:53.085170+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:53.085170+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: cluster 2026-03-09T17:38:53.086967+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: cluster 2026-03-09T17:38:53.086967+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:53.088856+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:53 vm02 bash[23351]: audit 2026-03-09T17:38:53.088856+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.039058+0000 mgr.y (mgr.14505) 498 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.039058+0000 mgr.y (mgr.14505) 498 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: cluster 2026-03-09T17:38:52.066780+0000 mon.a (mon.0) 2923 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: cluster 2026-03-09T17:38:52.066780+0000 mon.a (mon.0) 2923 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.070824+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.070824+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.077365+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.077365+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: cluster 2026-03-09T17:38:52.081390+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: cluster 2026-03-09T17:38:52.081390+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.082711+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:52.082711+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:53.081610+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:53.081610+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:53.085170+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:53.085170+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: cluster 2026-03-09T17:38:53.086967+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: cluster 2026-03-09T17:38:53.086967+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:53.088856+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:53 vm00 bash[20770]: audit 2026-03-09T17:38:53.088856+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.039058+0000 mgr.y (mgr.14505) 498 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.039058+0000 mgr.y (mgr.14505) 498 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: cluster 2026-03-09T17:38:52.066780+0000 mon.a (mon.0) 2923 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: cluster 2026-03-09T17:38:52.066780+0000 mon.a (mon.0) 2923 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.070824+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.070824+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-109-cache", "mode": "writeback"}]': finished 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.077365+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.077365+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: cluster 2026-03-09T17:38:52.081390+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: cluster 2026-03-09T17:38:52.081390+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.082711+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:52.082711+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:53.081610+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:53.081610+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:53.085170+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:53.085170+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: cluster 2026-03-09T17:38:53.086967+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: cluster 2026-03-09T17:38:53.086967+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:53.088856+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:53.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:53 vm00 bash[28333]: audit 2026-03-09T17:38:53.088856+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: cluster 2026-03-09T17:38:52.819634+0000 mgr.y (mgr.14505) 499 : cluster [DBG] pgmap v835: 276 pgs: 3 creating+activating, 5 creating+peering, 268 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: cluster 2026-03-09T17:38:52.819634+0000 mgr.y (mgr.14505) 499 : cluster [DBG] pgmap v835: 276 pgs: 3 creating+activating, 5 creating+peering, 268 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: audit 2026-03-09T17:38:54.084807+0000 mon.a (mon.0) 2930 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: audit 2026-03-09T17:38:54.084807+0000 mon.a (mon.0) 2930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: cluster 2026-03-09T17:38:54.087712+0000 mon.a (mon.0) 2931 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: cluster 2026-03-09T17:38:54.087712+0000 mon.a (mon.0) 2931 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: audit 2026-03-09T17:38:54.088527+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: audit 2026-03-09T17:38:54.088527+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: audit 2026-03-09T17:38:54.093520+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:54 vm02 bash[23351]: audit 2026-03-09T17:38:54.093520+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: cluster 2026-03-09T17:38:52.819634+0000 mgr.y (mgr.14505) 499 : cluster [DBG] pgmap v835: 276 pgs: 3 creating+activating, 5 creating+peering, 268 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: cluster 2026-03-09T17:38:52.819634+0000 mgr.y (mgr.14505) 499 : cluster [DBG] pgmap v835: 276 pgs: 3 creating+activating, 5 creating+peering, 268 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: audit 2026-03-09T17:38:54.084807+0000 mon.a (mon.0) 2930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: audit 2026-03-09T17:38:54.084807+0000 mon.a (mon.0) 2930 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: cluster 2026-03-09T17:38:54.087712+0000 mon.a (mon.0) 2931 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: cluster 2026-03-09T17:38:54.087712+0000 mon.a (mon.0) 2931 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: audit 2026-03-09T17:38:54.088527+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: audit 2026-03-09T17:38:54.088527+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: audit 2026-03-09T17:38:54.093520+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:54 vm00 bash[28333]: audit 2026-03-09T17:38:54.093520+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: cluster 2026-03-09T17:38:52.819634+0000 mgr.y (mgr.14505) 499 : cluster [DBG] pgmap v835: 276 pgs: 3 creating+activating, 5 creating+peering, 268 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: cluster 2026-03-09T17:38:52.819634+0000 mgr.y (mgr.14505) 499 : cluster [DBG] pgmap v835: 276 pgs: 3 creating+activating, 5 creating+peering, 268 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: audit 2026-03-09T17:38:54.084807+0000 mon.a (mon.0) 2930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: audit 2026-03-09T17:38:54.084807+0000 mon.a (mon.0) 2930 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:38:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: cluster 2026-03-09T17:38:54.087712+0000 mon.a (mon.0) 2931 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T17:38:54.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: cluster 2026-03-09T17:38:54.087712+0000 mon.a (mon.0) 2931 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T17:38:54.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: audit 2026-03-09T17:38:54.088527+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: audit 2026-03-09T17:38:54.088527+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: audit 2026-03-09T17:38:54.093520+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:54.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:54 vm00 bash[20770]: audit 2026-03-09T17:38:54.093520+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.619024+0000 mon.a (mon.0) 2933 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.619024+0000 mon.a (mon.0) 2933 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.631180+0000 mon.a (mon.0) 2934 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.631180+0000 mon.a (mon.0) 2934 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.810689+0000 mon.a (mon.0) 2935 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.810689+0000 mon.a (mon.0) 2935 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.819533+0000 mon.a (mon.0) 2936 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:54.819533+0000 mon.a (mon.0) 2936 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: cluster 2026-03-09T17:38:54.819975+0000 mgr.y (mgr.14505) 500 : cluster [DBG] pgmap v838: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: cluster 2026-03-09T17:38:54.819975+0000 mgr.y (mgr.14505) 500 : cluster [DBG] pgmap v838: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: cluster 2026-03-09T17:38:55.085070+0000 mon.a (mon.0) 2937 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: cluster 2026-03-09T17:38:55.085070+0000 mon.a (mon.0) 2937 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.088740+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.088740+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.094871+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.094871+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: cluster 2026-03-09T17:38:55.099562+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: cluster 2026-03-09T17:38:55.099562+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T17:38:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.101203+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.101203+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.133732+0000 mon.c (mon.2) 635 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.133732+0000 mon.c (mon.2) 635 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.134747+0000 mon.c (mon.2) 636 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.134747+0000 mon.c (mon.2) 636 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.140750+0000 mon.a (mon.0) 2941 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:55.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:55 vm02 bash[23351]: audit 2026-03-09T17:38:55.140750+0000 mon.a (mon.0) 2941 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.619024+0000 mon.a (mon.0) 2933 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.619024+0000 mon.a (mon.0) 2933 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.631180+0000 mon.a (mon.0) 2934 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.631180+0000 mon.a (mon.0) 2934 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.810689+0000 mon.a (mon.0) 2935 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.810689+0000 mon.a (mon.0) 2935 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.819533+0000 mon.a (mon.0) 2936 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:54.819533+0000 mon.a (mon.0) 2936 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: cluster 2026-03-09T17:38:54.819975+0000 mgr.y (mgr.14505) 500 : cluster [DBG] pgmap v838: 276 pgs: 276 active+clean; 455 KiB 
data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: cluster 2026-03-09T17:38:54.819975+0000 mgr.y (mgr.14505) 500 : cluster [DBG] pgmap v838: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: cluster 2026-03-09T17:38:55.085070+0000 mon.a (mon.0) 2937 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: cluster 2026-03-09T17:38:55.085070+0000 mon.a (mon.0) 2937 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.088740+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.088740+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.094871+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.094871+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: cluster 2026-03-09T17:38:55.099562+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: cluster 2026-03-09T17:38:55.099562+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.101203+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.101203+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.133732+0000 mon.c (mon.2) 635 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.133732+0000 mon.c (mon.2) 635 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.134747+0000 mon.c (mon.2) 636 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.134747+0000 mon.c (mon.2) 636 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.140750+0000 mon.a (mon.0) 2941 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:55 vm00 bash[20770]: audit 2026-03-09T17:38:55.140750+0000 mon.a (mon.0) 2941 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.619024+0000 mon.a (mon.0) 2933 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.619024+0000 mon.a (mon.0) 2933 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.631180+0000 mon.a (mon.0) 2934 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.631180+0000 mon.a (mon.0) 2934 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.810689+0000 mon.a (mon.0) 2935 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.810689+0000 mon.a (mon.0) 2935 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.819533+0000 mon.a (mon.0) 2936 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:54.819533+0000 mon.a (mon.0) 2936 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: cluster 2026-03-09T17:38:54.819975+0000 mgr.y (mgr.14505) 500 : cluster [DBG] pgmap v838: 276 pgs: 276 active+clean; 455 KiB 
data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: cluster 2026-03-09T17:38:54.819975+0000 mgr.y (mgr.14505) 500 : cluster [DBG] pgmap v838: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: cluster 2026-03-09T17:38:55.085070+0000 mon.a (mon.0) 2937 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: cluster 2026-03-09T17:38:55.085070+0000 mon.a (mon.0) 2937 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.088740+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.088740+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.094871+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.094871+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: cluster 2026-03-09T17:38:55.099562+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: cluster 2026-03-09T17:38:55.099562+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.101203+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.101203+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.133732+0000 mon.c (mon.2) 635 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.133732+0000 mon.c (mon.2) 635 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.134747+0000 mon.c (mon.2) 636 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.134747+0000 mon.c (mon.2) 636 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.140750+0000 mon.a (mon.0) 2941 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:55 vm00 bash[28333]: audit 2026-03-09T17:38:55.140750+0000 mon.a (mon.0) 2941 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:38:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:38:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:38:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: audit 2026-03-09T17:38:56.091760+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: audit 2026-03-09T17:38:56.091760+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: cluster 2026-03-09T17:38:56.094675+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: cluster 2026-03-09T17:38:56.094675+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: audit 2026-03-09T17:38:56.139506+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: audit 2026-03-09T17:38:56.139506+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: audit 2026-03-09T17:38:56.140642+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: audit 2026-03-09T17:38:56.140642+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: cluster 2026-03-09T17:38:56.775991+0000 mon.a (mon.0) 2945 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:57 vm02 bash[23351]: cluster 2026-03-09T17:38:56.775991+0000 mon.a (mon.0) 2945 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: audit 2026-03-09T17:38:56.091760+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: audit 2026-03-09T17:38:56.091760+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: cluster 2026-03-09T17:38:56.094675+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: cluster 2026-03-09T17:38:56.094675+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: audit 2026-03-09T17:38:56.139506+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: audit 2026-03-09T17:38:56.139506+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: audit 2026-03-09T17:38:56.140642+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: audit 2026-03-09T17:38:56.140642+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: cluster 2026-03-09T17:38:56.775991+0000 mon.a (mon.0) 2945 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:57 vm00 bash[28333]: cluster 2026-03-09T17:38:56.775991+0000 mon.a (mon.0) 2945 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: audit 2026-03-09T17:38:56.091760+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: audit 2026-03-09T17:38:56.091760+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: cluster 2026-03-09T17:38:56.094675+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: cluster 2026-03-09T17:38:56.094675+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: audit 2026-03-09T17:38:56.139506+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: audit 2026-03-09T17:38:56.139506+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: audit 2026-03-09T17:38:56.140642+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: audit 2026-03-09T17:38:56.140642+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: cluster 2026-03-09T17:38:56.775991+0000 mon.a (mon.0) 2945 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:57 vm00 bash[20770]: cluster 2026-03-09T17:38:56.775991+0000 mon.a (mon.0) 2945 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: cluster 2026-03-09T17:38:56.820388+0000 mgr.y (mgr.14505) 501 : cluster [DBG] pgmap v841: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: cluster 2026-03-09T17:38:56.820388+0000 mgr.y (mgr.14505) 501 : cluster [DBG] pgmap v841: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.102171+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.102171+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.109042+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.109042+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: cluster 2026-03-09T17:38:57.117422+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: cluster 2026-03-09T17:38:57.117422+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.118333+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.118333+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.928491+0000 mon.c (mon.2) 637 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:58 vm02 bash[23351]: audit 2026-03-09T17:38:57.928491+0000 mon.c (mon.2) 637 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: cluster 2026-03-09T17:38:56.820388+0000 mgr.y (mgr.14505) 501 : cluster [DBG] pgmap v841: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: cluster 2026-03-09T17:38:56.820388+0000 mgr.y (mgr.14505) 501 : cluster [DBG] pgmap v841: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.102171+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.102171+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.109042+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.109042+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: cluster 2026-03-09T17:38:57.117422+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: cluster 2026-03-09T17:38:57.117422+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.118333+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.118333+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.928491+0000 mon.c (mon.2) 637 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:58 vm00 bash[20770]: audit 2026-03-09T17:38:57.928491+0000 mon.c (mon.2) 637 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: cluster 2026-03-09T17:38:56.820388+0000 mgr.y (mgr.14505) 501 : cluster [DBG] pgmap v841: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: cluster 2026-03-09T17:38:56.820388+0000 mgr.y (mgr.14505) 501 : cluster [DBG] pgmap v841: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.102171+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.102171+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.109042+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.109042+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 
192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: cluster 2026-03-09T17:38:57.117422+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: cluster 2026-03-09T17:38:57.117422+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.118333+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.118333+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]: dispatch 2026-03-09T17:38:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.928491+0000 mon.c (mon.2) 637 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:58.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:58 vm00 bash[28333]: audit 2026-03-09T17:38:57.928491+0000 mon.c (mon.2) 637 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:38:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:59 vm02 bash[23351]: audit 2026-03-09T17:38:58.106217+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:59 vm02 bash[23351]: audit 2026-03-09T17:38:58.106217+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:59 vm02 bash[23351]: cluster 2026-03-09T17:38:58.111427+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T17:38:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:59 vm02 bash[23351]: cluster 2026-03-09T17:38:58.111427+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T17:38:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:59 vm02 bash[23351]: cluster 2026-03-09T17:38:59.120093+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T17:38:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:38:59 vm02 bash[23351]: cluster 2026-03-09T17:38:59.120093+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:59 vm00 bash[20770]: audit 2026-03-09T17:38:58.106217+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:59 vm00 bash[20770]: audit 2026-03-09T17:38:58.106217+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:59 vm00 bash[20770]: cluster 2026-03-09T17:38:58.111427+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:59 vm00 bash[20770]: cluster 2026-03-09T17:38:58.111427+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:59 vm00 bash[20770]: cluster 2026-03-09T17:38:59.120093+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:38:59 vm00 bash[20770]: cluster 2026-03-09T17:38:59.120093+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:59 vm00 bash[28333]: audit 2026-03-09T17:38:58.106217+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:59 vm00 bash[28333]: audit 2026-03-09T17:38:58.106217+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-109", "tierpool": "test-rados-api-vm00-60118-109-cache"}]': finished 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:59 vm00 bash[28333]: cluster 2026-03-09T17:38:58.111427+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:59 vm00 bash[28333]: cluster 2026-03-09T17:38:58.111427+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:59 vm00 bash[28333]: cluster 2026-03-09T17:38:59.120093+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T17:38:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:38:59 vm00 bash[28333]: cluster 2026-03-09T17:38:59.120093+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: cluster 2026-03-09T17:38:58.820698+0000 mgr.y (mgr.14505) 502 : cluster [DBG] pgmap v844: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: cluster 2026-03-09T17:38:58.820698+0000 mgr.y (mgr.14505) 502 : cluster [DBG] pgmap v844: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: cluster 2026-03-09T17:39:00.118439+0000 mon.a (mon.0) 2952 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: cluster 2026-03-09T17:39:00.118439+0000 mon.a (mon.0) 2952 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: audit 2026-03-09T17:39:00.119921+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: audit 2026-03-09T17:39:00.119921+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: audit 2026-03-09T17:39:00.122059+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:00 vm00 bash[28333]: audit 2026-03-09T17:39:00.122059+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: cluster 2026-03-09T17:38:58.820698+0000 mgr.y (mgr.14505) 502 : cluster [DBG] pgmap v844: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: cluster 2026-03-09T17:38:58.820698+0000 mgr.y (mgr.14505) 502 : cluster [DBG] pgmap v844: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: cluster 2026-03-09T17:39:00.118439+0000 mon.a (mon.0) 2952 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: cluster 2026-03-09T17:39:00.118439+0000 mon.a (mon.0) 2952 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: audit 2026-03-09T17:39:00.119921+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: audit 2026-03-09T17:39:00.119921+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: audit 2026-03-09T17:39:00.122059+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:00 vm00 bash[20770]: audit 2026-03-09T17:39:00.122059+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: cluster 2026-03-09T17:38:58.820698+0000 mgr.y (mgr.14505) 502 : cluster [DBG] pgmap v844: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:39:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: cluster 2026-03-09T17:38:58.820698+0000 mgr.y (mgr.14505) 502 : cluster [DBG] pgmap v844: 276 pgs: 276 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:39:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: cluster 2026-03-09T17:39:00.118439+0000 mon.a (mon.0) 2952 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T17:39:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: cluster 2026-03-09T17:39:00.118439+0000 mon.a (mon.0) 2952 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T17:39:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: audit 2026-03-09T17:39:00.119921+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: audit 2026-03-09T17:39:00.119921+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: audit 2026-03-09T17:39:00.122059+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:00.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:00 vm02 bash[23351]: audit 2026-03-09T17:39:00.122059+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: cluster 2026-03-09T17:39:00.821016+0000 mgr.y (mgr.14505) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: cluster 2026-03-09T17:39:00.821016+0000 mgr.y (mgr.14505) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: audit 2026-03-09T17:39:01.117207+0000 mon.a (mon.0) 2954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: audit 2026-03-09T17:39:01.117207+0000 mon.a (mon.0) 2954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: cluster 2026-03-09T17:39:01.125846+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: cluster 2026-03-09T17:39:01.125846+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: audit 2026-03-09T17:39:01.126067+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: audit 2026-03-09T17:39:01.126067+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: audit 2026-03-09T17:39:01.127370+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: audit 2026-03-09T17:39:01.127370+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: cluster 2026-03-09T17:39:01.776689+0000 mon.a (mon.0) 2957 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:02 vm02 bash[23351]: cluster 2026-03-09T17:39:01.776689+0000 mon.a (mon.0) 2957 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:02.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:39:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: cluster 2026-03-09T17:39:00.821016+0000 mgr.y (mgr.14505) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: cluster 2026-03-09T17:39:00.821016+0000 mgr.y (mgr.14505) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: audit 2026-03-09T17:39:01.117207+0000 mon.a (mon.0) 2954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: audit 2026-03-09T17:39:01.117207+0000 mon.a (mon.0) 2954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: cluster 2026-03-09T17:39:01.125846+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: cluster 2026-03-09T17:39:01.125846+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: audit 2026-03-09T17:39:01.126067+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: audit 2026-03-09T17:39:01.126067+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: audit 2026-03-09T17:39:01.127370+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: audit 2026-03-09T17:39:01.127370+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: cluster 2026-03-09T17:39:01.776689+0000 mon.a (mon.0) 2957 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:02 vm00 bash[28333]: cluster 2026-03-09T17:39:01.776689+0000 mon.a (mon.0) 2957 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: cluster 2026-03-09T17:39:00.821016+0000 mgr.y (mgr.14505) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: cluster 2026-03-09T17:39:00.821016+0000 mgr.y (mgr.14505) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: audit 2026-03-09T17:39:01.117207+0000 mon.a (mon.0) 2954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: audit 2026-03-09T17:39:01.117207+0000 mon.a (mon.0) 2954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: cluster 2026-03-09T17:39:01.125846+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: cluster 2026-03-09T17:39:01.125846+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: audit 2026-03-09T17:39:01.126067+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: audit 2026-03-09T17:39:01.126067+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/3247834812' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: audit 2026-03-09T17:39:01.127370+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: audit 2026-03-09T17:39:01.127370+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]: dispatch 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: cluster 2026-03-09T17:39:01.776689+0000 mon.a (mon.0) 2957 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:02 vm00 bash[20770]: cluster 2026-03-09T17:39:01.776689+0000 mon.a (mon.0) 2957 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:03 vm02 bash[23351]: audit 2026-03-09T17:39:02.050005+0000 mgr.y (mgr.14505) 504 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:03 vm02 bash[23351]: audit 2026-03-09T17:39:02.050005+0000 mgr.y (mgr.14505) 504 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:03 vm02 bash[23351]: audit 2026-03-09T17:39:02.130088+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:03 vm02 bash[23351]: audit 2026-03-09T17:39:02.130088+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:03 vm02 bash[23351]: cluster 2026-03-09T17:39:02.141159+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T17:39:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:03 vm02 bash[23351]: cluster 2026-03-09T17:39:02.141159+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:03 vm00 bash[20770]: audit 2026-03-09T17:39:02.050005+0000 mgr.y (mgr.14505) 504 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:03 vm00 bash[20770]: audit 2026-03-09T17:39:02.050005+0000 mgr.y (mgr.14505) 504 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:03 vm00 bash[20770]: audit 2026-03-09T17:39:02.130088+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:03 vm00 bash[20770]: audit 2026-03-09T17:39:02.130088+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:03 vm00 bash[20770]: cluster 2026-03-09T17:39:02.141159+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:03 vm00 bash[20770]: cluster 2026-03-09T17:39:02.141159+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:03 vm00 bash[28333]: audit 2026-03-09T17:39:02.050005+0000 mgr.y (mgr.14505) 504 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:03 vm00 bash[28333]: audit 2026-03-09T17:39:02.050005+0000 mgr.y (mgr.14505) 504 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:03 vm00 bash[28333]: audit 2026-03-09T17:39:02.130088+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:03 vm00 bash[28333]: audit 2026-03-09T17:39:02.130088+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-109"}]': finished 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:03 vm00 bash[28333]: cluster 2026-03-09T17:39:02.141159+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T17:39:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:03 vm00 bash[28333]: cluster 2026-03-09T17:39:02.141159+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: cluster 2026-03-09T17:39:02.821648+0000 mgr.y (mgr.14505) 505 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: cluster 2026-03-09T17:39:02.821648+0000 mgr.y (mgr.14505) 505 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: cluster 2026-03-09T17:39:03.153138+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: cluster 2026-03-09T17:39:03.153138+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: audit 2026-03-09T17:39:03.156840+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: audit 2026-03-09T17:39:03.156840+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: audit 2026-03-09T17:39:03.157199+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:04 vm00 bash[20770]: audit 2026-03-09T17:39:03.157199+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: cluster 2026-03-09T17:39:02.821648+0000 mgr.y (mgr.14505) 505 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: cluster 2026-03-09T17:39:02.821648+0000 mgr.y (mgr.14505) 505 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: cluster 2026-03-09T17:39:03.153138+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: cluster 2026-03-09T17:39:03.153138+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: audit 2026-03-09T17:39:03.156840+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: audit 2026-03-09T17:39:03.156840+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: audit 2026-03-09T17:39:03.157199+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:04 vm00 bash[28333]: audit 2026-03-09T17:39:03.157199+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: cluster 2026-03-09T17:39:02.821648+0000 mgr.y (mgr.14505) 505 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: cluster 2026-03-09T17:39:02.821648+0000 mgr.y (mgr.14505) 505 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: cluster 2026-03-09T17:39:03.153138+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: cluster 2026-03-09T17:39:03.153138+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: audit 2026-03-09T17:39:03.156840+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: audit 2026-03-09T17:39:03.156840+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: audit 2026-03-09T17:39:03.157199+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:04 vm02 bash[23351]: audit 2026-03-09T17:39:03.157199+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: audit 2026-03-09T17:39:04.156381+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: audit 2026-03-09T17:39:04.156381+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: cluster 2026-03-09T17:39:04.162588+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: cluster 2026-03-09T17:39:04.162588+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: audit 2026-03-09T17:39:04.164315+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: audit 2026-03-09T17:39:04.164315+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: audit 2026-03-09T17:39:04.166807+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:05 vm00 bash[20770]: audit 2026-03-09T17:39:04.166807+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: audit 2026-03-09T17:39:04.156381+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: audit 2026-03-09T17:39:04.156381+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: cluster 2026-03-09T17:39:04.162588+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: cluster 2026-03-09T17:39:04.162588+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: audit 2026-03-09T17:39:04.164315+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: audit 2026-03-09T17:39:04.164315+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 
192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: audit 2026-03-09T17:39:04.166807+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:05 vm00 bash[28333]: audit 2026-03-09T17:39:04.166807+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: audit 2026-03-09T17:39:04.156381+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: audit 2026-03-09T17:39:04.156381+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: cluster 2026-03-09T17:39:04.162588+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: cluster 2026-03-09T17:39:04.162588+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: audit 2026-03-09T17:39:04.164315+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: audit 2026-03-09T17:39:04.164315+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.100:0/831670455' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: audit 2026-03-09T17:39:04.166807+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:05 vm02 bash[23351]: audit 2026-03-09T17:39:04.166807+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: cluster 2026-03-09T17:39:04.822014+0000 mgr.y (mgr.14505) 506 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: cluster 2026-03-09T17:39:04.822014+0000 mgr.y (mgr.14505) 506 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.164986+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.164986+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: cluster 2026-03-09T17:39:05.180341+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: cluster 2026-03-09T17:39:05.180341+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.187845+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.187845+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.188113+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.188113+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.188727+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.188727+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.188933+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.188933+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.189454+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.189454+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.189662+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:06 vm00 bash[20770]: audit 2026-03-09T17:39:05.189662+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:39:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:39:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: cluster 2026-03-09T17:39:04.822014+0000 mgr.y (mgr.14505) 506 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: cluster 2026-03-09T17:39:04.822014+0000 mgr.y (mgr.14505) 506 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.164986+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.164986+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: cluster 2026-03-09T17:39:05.180341+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: cluster 2026-03-09T17:39:05.180341+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.187845+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.187845+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.188113+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.188113+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.188727+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.188727+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.188933+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.188933+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.189454+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.189454+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.189662+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:06 vm00 bash[28333]: audit 2026-03-09T17:39:05.189662+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: cluster 2026-03-09T17:39:04.822014+0000 mgr.y (mgr.14505) 506 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: cluster 2026-03-09T17:39:04.822014+0000 mgr.y (mgr.14505) 506 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.164986+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.164986+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-60118-104"}]': finished 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: cluster 2026-03-09T17:39:05.180341+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: cluster 2026-03-09T17:39:05.180341+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.187845+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.187845+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.188113+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.188113+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.188727+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.188727+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.188933+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.188933+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.189454+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.189454+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.189662+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:06.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:06 vm02 bash[23351]: audit 2026-03-09T17:39:05.189662+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: audit 2026-03-09T17:39:06.169609+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: audit 2026-03-09T17:39:06.169609+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: cluster 2026-03-09T17:39:06.178447+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: cluster 2026-03-09T17:39:06.178447+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: audit 2026-03-09T17:39:06.180272+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: audit 2026-03-09T17:39:06.180272+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: audit 2026-03-09T17:39:06.181196+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: audit 2026-03-09T17:39:06.181196+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: cluster 2026-03-09T17:39:06.777414+0000 mon.a (mon.0) 2973 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:07 vm00 bash[20770]: cluster 2026-03-09T17:39:06.777414+0000 mon.a (mon.0) 2973 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: audit 2026-03-09T17:39:06.169609+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: audit 2026-03-09T17:39:06.169609+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: cluster 2026-03-09T17:39:06.178447+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: cluster 2026-03-09T17:39:06.178447+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: audit 2026-03-09T17:39:06.180272+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: audit 2026-03-09T17:39:06.180272+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: audit 2026-03-09T17:39:06.181196+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: audit 2026-03-09T17:39:06.181196+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: cluster 2026-03-09T17:39:06.777414+0000 mon.a (mon.0) 2973 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:07 vm00 bash[28333]: cluster 2026-03-09T17:39:06.777414+0000 mon.a (mon.0) 2973 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: audit 2026-03-09T17:39:06.169609+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: audit 2026-03-09T17:39:06.169609+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-60118-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: cluster 2026-03-09T17:39:06.178447+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: cluster 2026-03-09T17:39:06.178447+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: audit 2026-03-09T17:39:06.180272+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: audit 2026-03-09T17:39:06.180272+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: audit 2026-03-09T17:39:06.181196+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: audit 2026-03-09T17:39:06.181196+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: cluster 2026-03-09T17:39:06.777414+0000 mon.a (mon.0) 2973 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:07 vm02 bash[23351]: cluster 2026-03-09T17:39:06.777414+0000 mon.a (mon.0) 2973 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: cluster 2026-03-09T17:39:06.822352+0000 mgr.y (mgr.14505) 507 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: cluster 2026-03-09T17:39:06.822352+0000 mgr.y (mgr.14505) 507 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: cluster 2026-03-09T17:39:07.190277+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: cluster 2026-03-09T17:39:07.190277+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: audit 2026-03-09T17:39:08.177210+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: audit 2026-03-09T17:39:08.177210+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: cluster 2026-03-09T17:39:08.190263+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:08 vm00 bash[20770]: cluster 2026-03-09T17:39:08.190263+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: cluster 2026-03-09T17:39:06.822352+0000 mgr.y (mgr.14505) 507 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: cluster 2026-03-09T17:39:06.822352+0000 mgr.y (mgr.14505) 507 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: cluster 2026-03-09T17:39:07.190277+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: cluster 2026-03-09T17:39:07.190277+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: audit 2026-03-09T17:39:08.177210+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: audit 2026-03-09T17:39:08.177210+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: cluster 2026-03-09T17:39:08.190263+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T17:39:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:08 vm00 bash[28333]: cluster 2026-03-09T17:39:08.190263+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: cluster 2026-03-09T17:39:06.822352+0000 mgr.y (mgr.14505) 507 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: cluster 2026-03-09T17:39:06.822352+0000 mgr.y (mgr.14505) 507 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: cluster 2026-03-09T17:39:07.190277+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: cluster 2026-03-09T17:39:07.190277+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: audit 2026-03-09T17:39:08.177210+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: audit 2026-03-09T17:39:08.177210+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60118-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: cluster 2026-03-09T17:39:08.190263+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T17:39:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:08 vm02 bash[23351]: cluster 2026-03-09T17:39:08.190263+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: cluster 2026-03-09T17:39:08.822655+0000 mgr.y (mgr.14505) 508 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: cluster 2026-03-09T17:39:08.822655+0000 mgr.y (mgr.14505) 508 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: audit 2026-03-09T17:39:09.208011+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: audit 2026-03-09T17:39:09.208011+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: cluster 2026-03-09T17:39:09.208357+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: cluster 2026-03-09T17:39:09.208357+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: audit 2026-03-09T17:39:09.210773+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:10 vm00 bash[20770]: audit 2026-03-09T17:39:09.210773+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: cluster 2026-03-09T17:39:08.822655+0000 mgr.y (mgr.14505) 508 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: cluster 2026-03-09T17:39:08.822655+0000 mgr.y (mgr.14505) 508 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: audit 2026-03-09T17:39:09.208011+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: audit 2026-03-09T17:39:09.208011+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: cluster 2026-03-09T17:39:09.208357+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: cluster 2026-03-09T17:39:09.208357+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: audit 2026-03-09T17:39:09.210773+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:10 vm00 bash[28333]: audit 2026-03-09T17:39:09.210773+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: cluster 2026-03-09T17:39:08.822655+0000 mgr.y (mgr.14505) 508 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: cluster 2026-03-09T17:39:08.822655+0000 mgr.y (mgr.14505) 508 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: audit 2026-03-09T17:39:09.208011+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: audit 2026-03-09T17:39:09.208011+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: cluster 2026-03-09T17:39:09.208357+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: cluster 2026-03-09T17:39:09.208357+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: audit 2026-03-09T17:39:09.210773+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:10 vm02 bash[23351]: audit 2026-03-09T17:39:09.210773+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: audit 2026-03-09T17:39:10.183662+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: audit 2026-03-09T17:39:10.183662+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: cluster 2026-03-09T17:39:10.188542+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: cluster 2026-03-09T17:39:10.188542+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: audit 2026-03-09T17:39:10.237468+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: audit 2026-03-09T17:39:10.237468+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: audit 2026-03-09T17:39:10.237734+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:11 vm00 bash[28333]: audit 2026-03-09T17:39:10.237734+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: audit 2026-03-09T17:39:10.183662+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: audit 2026-03-09T17:39:10.183662+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: cluster 2026-03-09T17:39:10.188542+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: cluster 2026-03-09T17:39:10.188542+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: audit 2026-03-09T17:39:10.237468+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: audit 2026-03-09T17:39:10.237468+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: audit 2026-03-09T17:39:10.237734+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:11 vm00 bash[20770]: audit 2026-03-09T17:39:10.237734+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: audit 2026-03-09T17:39:10.183662+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: audit 2026-03-09T17:39:10.183662+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: cluster 2026-03-09T17:39:10.188542+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: cluster 2026-03-09T17:39:10.188542+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: audit 2026-03-09T17:39:10.237468+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: audit 2026-03-09T17:39:10.237468+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: audit 2026-03-09T17:39:10.237734+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:11.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:11 vm02 bash[23351]: audit 2026-03-09T17:39:10.237734+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: cluster 2026-03-09T17:39:10.823045+0000 mgr.y (mgr.14505) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: cluster 2026-03-09T17:39:10.823045+0000 mgr.y (mgr.14505) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: audit 2026-03-09T17:39:11.209874+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: audit 2026-03-09T17:39:11.209874+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: cluster 2026-03-09T17:39:11.213531+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: cluster 2026-03-09T17:39:11.213531+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: audit 2026-03-09T17:39:11.221725+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: audit 2026-03-09T17:39:11.221725+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: audit 2026-03-09T17:39:11.229789+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: audit 2026-03-09T17:39:11.229789+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: cluster 2026-03-09T17:39:11.777989+0000 mon.a (mon.0) 2985 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:12 vm02 bash[23351]: cluster 2026-03-09T17:39:11.777989+0000 mon.a (mon.0) 2985 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:12.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:39:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: cluster 2026-03-09T17:39:10.823045+0000 mgr.y (mgr.14505) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: cluster 2026-03-09T17:39:10.823045+0000 mgr.y (mgr.14505) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: audit 2026-03-09T17:39:11.209874+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: audit 2026-03-09T17:39:11.209874+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: cluster 2026-03-09T17:39:11.213531+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: cluster 2026-03-09T17:39:11.213531+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: audit 2026-03-09T17:39:11.221725+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: audit 2026-03-09T17:39:11.221725+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: audit 2026-03-09T17:39:11.229789+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: audit 2026-03-09T17:39:11.229789+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: cluster 2026-03-09T17:39:11.777989+0000 mon.a (mon.0) 2985 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:12 vm00 bash[28333]: cluster 2026-03-09T17:39:11.777989+0000 mon.a (mon.0) 2985 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: cluster 2026-03-09T17:39:10.823045+0000 mgr.y (mgr.14505) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: cluster 2026-03-09T17:39:10.823045+0000 mgr.y (mgr.14505) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: audit 2026-03-09T17:39:11.209874+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: audit 2026-03-09T17:39:11.209874+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: cluster 2026-03-09T17:39:11.213531+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: cluster 2026-03-09T17:39:11.213531+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: audit 2026-03-09T17:39:11.221725+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: audit 2026-03-09T17:39:11.221725+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: audit 2026-03-09T17:39:11.229789+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: audit 2026-03-09T17:39:11.229789+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: cluster 2026-03-09T17:39:11.777989+0000 mon.a (mon.0) 2985 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:12 vm00 bash[20770]: cluster 2026-03-09T17:39:11.777989+0000 mon.a (mon.0) 2985 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.060733+0000 mgr.y (mgr.14505) 510 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.060733+0000 mgr.y (mgr.14505) 510 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.235752+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.235752+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: cluster 2026-03-09T17:39:12.240025+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: cluster 2026-03-09T17:39:12.240025+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.266453+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 
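The next group of records covers tagging the cache pool and attaching it as a tier over the EC base pool: application enable on test-rados-api-vm00-60118-112, osd tier add with --force-nonempty, then osd tier set-overlay. A rough CLI rendering of the same sequence, again reconstructed from the audit payloads rather than copied from the test source:

    # tag the cache pool so the application warning clears for it
    ceph osd pool application enable test-rados-api-vm00-60118-112 rados --yes-i-really-mean-it
    # attach it as a cache tier over the erasure-coded base pool
    ceph osd tier add test-rados-api-vm00-60118-111 test-rados-api-vm00-60118-112 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm00-60118-111 test-rados-api-vm00-60118-112

Each step shows the same pattern in the log: a dispatch entry on the monitor that received the command (mon.c here), a matching dispatch and later a finished entry on the leader mon.a, and a new osdmap epoch (e556 through e559) once the change commits.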
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.266453+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.266740+0000 mon.a (mon.0) 2988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.266740+0000 mon.a (mon.0) 2988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.935261+0000 mon.c (mon.2) 648 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:13 vm00 bash[20770]: audit 2026-03-09T17:39:12.935261+0000 mon.c (mon.2) 648 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.060733+0000 mgr.y (mgr.14505) 510 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.060733+0000 mgr.y (mgr.14505) 510 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.235752+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.235752+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: cluster 2026-03-09T17:39:12.240025+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: cluster 2026-03-09T17:39:12.240025+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.266453+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.266453+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.266740+0000 mon.a (mon.0) 2988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.266740+0000 mon.a (mon.0) 2988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.935261+0000 mon.c (mon.2) 648 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:13 vm00 bash[28333]: audit 2026-03-09T17:39:12.935261+0000 mon.c (mon.2) 648 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.060733+0000 mgr.y (mgr.14505) 510 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.060733+0000 mgr.y (mgr.14505) 510 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.235752+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.235752+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: cluster 2026-03-09T17:39:12.240025+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: cluster 2026-03-09T17:39:12.240025+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.266453+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.266453+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.266740+0000 mon.a (mon.0) 2988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.266740+0000 mon.a (mon.0) 2988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.935261+0000 mon.c (mon.2) 648 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:13 vm02 bash[23351]: audit 2026-03-09T17:39:12.935261+0000 mon.c (mon.2) 648 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: cluster 2026-03-09T17:39:12.823912+0000 mgr.y (mgr.14505) 511 : cluster [DBG] pgmap v865: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: cluster 2026-03-09T17:39:12.823912+0000 mgr.y (mgr.14505) 511 : cluster [DBG] pgmap v865: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:13.238982+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:13.238982+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:13.249493+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:13.249493+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: cluster 2026-03-09T17:39:13.257856+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: cluster 2026-03-09T17:39:13.257856+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:13.259562+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:13.259562+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:14.242006+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: audit 2026-03-09T17:39:14.242006+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: cluster 2026-03-09T17:39:14.249899+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:14 vm00 bash[20770]: cluster 2026-03-09T17:39:14.249899+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: cluster 2026-03-09T17:39:12.823912+0000 mgr.y (mgr.14505) 511 : cluster [DBG] pgmap v865: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: cluster 2026-03-09T17:39:12.823912+0000 mgr.y (mgr.14505) 511 : cluster [DBG] pgmap v865: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:13.238982+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:13.238982+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:13.249493+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:13.249493+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: cluster 2026-03-09T17:39:13.257856+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: cluster 2026-03-09T17:39:13.257856+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:13.259562+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:13.259562+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:14.242006+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: audit 2026-03-09T17:39:14.242006+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: cluster 2026-03-09T17:39:14.249899+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T17:39:14.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:14 vm00 bash[28333]: cluster 2026-03-09T17:39:14.249899+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: cluster 2026-03-09T17:39:12.823912+0000 mgr.y (mgr.14505) 511 : cluster [DBG] pgmap v865: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: cluster 2026-03-09T17:39:12.823912+0000 mgr.y (mgr.14505) 511 : cluster [DBG] pgmap v865: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:13.238982+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:13.238982+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:13.249493+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:13.249493+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: cluster 2026-03-09T17:39:13.257856+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: cluster 2026-03-09T17:39:13.257856+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:13.259562+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:13.259562+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]: dispatch 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:14.242006+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: audit 2026-03-09T17:39:14.242006+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-112"}]': finished 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: cluster 2026-03-09T17:39:14.249899+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T17:39:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:14 vm02 bash[23351]: cluster 2026-03-09T17:39:14.249899+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T17:39:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:16 vm02 bash[23351]: cluster 2026-03-09T17:39:14.824270+0000 mgr.y (mgr.14505) 512 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:16 vm02 bash[23351]: cluster 2026-03-09T17:39:14.824270+0000 mgr.y (mgr.14505) 512 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:16 vm02 bash[23351]: cluster 2026-03-09T17:39:15.287075+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T17:39:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:16 vm02 bash[23351]: cluster 2026-03-09T17:39:15.287075+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:16 vm00 bash[28333]: cluster 2026-03-09T17:39:14.824270+0000 mgr.y (mgr.14505) 512 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:16 vm00 bash[28333]: cluster 2026-03-09T17:39:14.824270+0000 mgr.y (mgr.14505) 512 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:16 vm00 bash[28333]: cluster 2026-03-09T17:39:15.287075+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:16 vm00 bash[28333]: cluster 2026-03-09T17:39:15.287075+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:16 vm00 bash[20770]: cluster 2026-03-09T17:39:14.824270+0000 mgr.y (mgr.14505) 512 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 
B/s wr, 2 op/s 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:16 vm00 bash[20770]: cluster 2026-03-09T17:39:14.824270+0000 mgr.y (mgr.14505) 512 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:16 vm00 bash[20770]: cluster 2026-03-09T17:39:15.287075+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:16 vm00 bash[20770]: cluster 2026-03-09T17:39:15.287075+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T17:39:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:39:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:39:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: cluster 2026-03-09T17:39:16.310913+0000 mon.a (mon.0) 2995 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: cluster 2026-03-09T17:39:16.310913+0000 mon.a (mon.0) 2995 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: audit 2026-03-09T17:39:16.329103+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: audit 2026-03-09T17:39:16.329103+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: audit 2026-03-09T17:39:16.329651+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: audit 2026-03-09T17:39:16.329651+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: cluster 2026-03-09T17:39:16.778635+0000 mon.a (mon.0) 2997 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:17 vm02 bash[23351]: cluster 2026-03-09T17:39:16.778635+0000 mon.a (mon.0) 2997 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: cluster 2026-03-09T17:39:16.310913+0000 mon.a (mon.0) 2995 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: cluster 2026-03-09T17:39:16.310913+0000 mon.a (mon.0) 2995 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: audit 2026-03-09T17:39:16.329103+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: audit 2026-03-09T17:39:16.329103+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: audit 2026-03-09T17:39:16.329651+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: audit 2026-03-09T17:39:16.329651+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: cluster 2026-03-09T17:39:16.778635+0000 mon.a (mon.0) 2997 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:17 vm00 bash[28333]: cluster 2026-03-09T17:39:16.778635+0000 mon.a (mon.0) 2997 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: cluster 2026-03-09T17:39:16.310913+0000 mon.a (mon.0) 2995 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: cluster 2026-03-09T17:39:16.310913+0000 mon.a (mon.0) 2995 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: audit 2026-03-09T17:39:16.329103+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: audit 2026-03-09T17:39:16.329103+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: audit 2026-03-09T17:39:16.329651+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: audit 2026-03-09T17:39:16.329651+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: cluster 2026-03-09T17:39:16.778635+0000 mon.a (mon.0) 2997 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:17.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:17 vm00 bash[20770]: cluster 2026-03-09T17:39:16.778635+0000 mon.a (mon.0) 2997 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: cluster 2026-03-09T17:39:16.824614+0000 mgr.y (mgr.14505) 513 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: cluster 2026-03-09T17:39:16.824614+0000 mgr.y (mgr.14505) 513 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: audit 2026-03-09T17:39:17.320357+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: audit 2026-03-09T17:39:17.320357+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: cluster 2026-03-09T17:39:17.327291+0000 mon.a (mon.0) 2999 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: cluster 2026-03-09T17:39:17.327291+0000 mon.a (mon.0) 2999 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: audit 2026-03-09T17:39:17.361435+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: audit 2026-03-09T17:39:17.361435+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: audit 2026-03-09T17:39:17.363793+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:18 vm00 bash[28333]: audit 2026-03-09T17:39:17.363793+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: cluster 2026-03-09T17:39:16.824614+0000 mgr.y (mgr.14505) 513 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: cluster 2026-03-09T17:39:16.824614+0000 mgr.y (mgr.14505) 513 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: audit 2026-03-09T17:39:17.320357+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: audit 2026-03-09T17:39:17.320357+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: cluster 2026-03-09T17:39:17.327291+0000 mon.a (mon.0) 2999 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: cluster 2026-03-09T17:39:17.327291+0000 mon.a (mon.0) 2999 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: audit 2026-03-09T17:39:17.361435+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: audit 2026-03-09T17:39:17.361435+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: audit 2026-03-09T17:39:17.363793+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:18 vm00 bash[20770]: audit 2026-03-09T17:39:17.363793+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: cluster 2026-03-09T17:39:16.824614+0000 mgr.y (mgr.14505) 513 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: cluster 2026-03-09T17:39:16.824614+0000 mgr.y (mgr.14505) 513 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: audit 2026-03-09T17:39:17.320357+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: audit 2026-03-09T17:39:17.320357+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: cluster 2026-03-09T17:39:17.327291+0000 mon.a (mon.0) 2999 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: cluster 2026-03-09T17:39:17.327291+0000 mon.a (mon.0) 2999 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: audit 2026-03-09T17:39:17.361435+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: audit 2026-03-09T17:39:17.361435+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: audit 2026-03-09T17:39:17.363793+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:18 vm02 bash[23351]: audit 2026-03-09T17:39:17.363793+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: audit 2026-03-09T17:39:18.415881+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: audit 2026-03-09T17:39:18.415881+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: cluster 2026-03-09T17:39:18.419515+0000 mon.a (mon.0) 3002 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: cluster 2026-03-09T17:39:18.419515+0000 mon.a (mon.0) 3002 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: audit 2026-03-09T17:39:18.426359+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: audit 2026-03-09T17:39:18.426359+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: audit 2026-03-09T17:39:18.437044+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: audit 2026-03-09T17:39:18.437044+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: cluster 2026-03-09T17:39:18.824963+0000 mgr.y (mgr.14505) 514 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:19 vm00 bash[28333]: cluster 2026-03-09T17:39:18.824963+0000 mgr.y (mgr.14505) 514 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: audit 2026-03-09T17:39:18.415881+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: audit 2026-03-09T17:39:18.415881+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: cluster 2026-03-09T17:39:18.419515+0000 mon.a (mon.0) 3002 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: cluster 2026-03-09T17:39:18.419515+0000 mon.a (mon.0) 3002 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: audit 2026-03-09T17:39:18.426359+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: audit 2026-03-09T17:39:18.426359+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: audit 2026-03-09T17:39:18.437044+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: audit 2026-03-09T17:39:18.437044+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: cluster 2026-03-09T17:39:18.824963+0000 mgr.y (mgr.14505) 514 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:39:19.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:19 vm00 bash[20770]: cluster 2026-03-09T17:39:18.824963+0000 mgr.y (mgr.14505) 514 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: audit 2026-03-09T17:39:18.415881+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: audit 2026-03-09T17:39:18.415881+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: cluster 2026-03-09T17:39:18.419515+0000 mon.a (mon.0) 3002 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: cluster 2026-03-09T17:39:18.419515+0000 mon.a (mon.0) 3002 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: audit 2026-03-09T17:39:18.426359+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: audit 2026-03-09T17:39:18.426359+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: audit 2026-03-09T17:39:18.437044+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: audit 2026-03-09T17:39:18.437044+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: cluster 2026-03-09T17:39:18.824963+0000 mgr.y (mgr.14505) 514 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:39:19.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:19 vm02 bash[23351]: cluster 2026-03-09T17:39:18.824963+0000 mgr.y (mgr.14505) 514 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: audit 2026-03-09T17:39:19.436971+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: audit 2026-03-09T17:39:19.436971+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: audit 2026-03-09T17:39:19.449790+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: audit 2026-03-09T17:39:19.449790+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: cluster 2026-03-09T17:39:19.454678+0000 mon.a (mon.0) 3005 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: cluster 2026-03-09T17:39:19.454678+0000 mon.a (mon.0) 3005 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: audit 2026-03-09T17:39:19.458499+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:20 vm00 bash[20770]: audit 2026-03-09T17:39:19.458499+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: audit 2026-03-09T17:39:19.436971+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: audit 2026-03-09T17:39:19.436971+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: audit 2026-03-09T17:39:19.449790+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: audit 2026-03-09T17:39:19.449790+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: cluster 2026-03-09T17:39:19.454678+0000 mon.a (mon.0) 3005 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: cluster 2026-03-09T17:39:19.454678+0000 mon.a (mon.0) 3005 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: audit 2026-03-09T17:39:19.458499+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:20 vm00 bash[28333]: audit 2026-03-09T17:39:19.458499+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: audit 2026-03-09T17:39:19.436971+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: audit 2026-03-09T17:39:19.436971+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: audit 2026-03-09T17:39:19.449790+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: audit 2026-03-09T17:39:19.449790+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: cluster 2026-03-09T17:39:19.454678+0000 mon.a (mon.0) 3005 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: cluster 2026-03-09T17:39:19.454678+0000 mon.a (mon.0) 3005 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: audit 2026-03-09T17:39:19.458499+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:20 vm02 bash[23351]: audit 2026-03-09T17:39:19.458499+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]: dispatch 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: cluster 2026-03-09T17:39:20.437195+0000 mon.a (mon.0) 3007 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: cluster 2026-03-09T17:39:20.437195+0000 mon.a (mon.0) 3007 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: audit 2026-03-09T17:39:20.440379+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]': finished 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: audit 2026-03-09T17:39:20.440379+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]': finished 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: cluster 2026-03-09T17:39:20.443696+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: cluster 2026-03-09T17:39:20.443696+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: audit 2026-03-09T17:39:20.502823+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: audit 2026-03-09T17:39:20.502823+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: audit 2026-03-09T17:39:20.503345+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: audit 2026-03-09T17:39:20.503345+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: cluster 2026-03-09T17:39:20.825264+0000 mgr.y (mgr.14505) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:39:21.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:21 vm02 bash[23351]: cluster 2026-03-09T17:39:20.825264+0000 mgr.y (mgr.14505) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: cluster 2026-03-09T17:39:20.437195+0000 mon.a (mon.0) 3007 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: cluster 2026-03-09T17:39:20.437195+0000 mon.a (mon.0) 3007 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: audit 2026-03-09T17:39:20.440379+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]': finished 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: audit 2026-03-09T17:39:20.440379+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]': finished 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: cluster 2026-03-09T17:39:20.443696+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: cluster 2026-03-09T17:39:20.443696+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: audit 2026-03-09T17:39:20.502823+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: audit 2026-03-09T17:39:20.502823+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: audit 2026-03-09T17:39:20.503345+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: audit 2026-03-09T17:39:20.503345+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: cluster 2026-03-09T17:39:20.825264+0000 mgr.y (mgr.14505) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:21 vm00 bash[20770]: cluster 2026-03-09T17:39:20.825264+0000 mgr.y (mgr.14505) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: cluster 2026-03-09T17:39:20.437195+0000 mon.a (mon.0) 3007 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: cluster 2026-03-09T17:39:20.437195+0000 mon.a (mon.0) 3007 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: audit 2026-03-09T17:39:20.440379+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]': finished 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: audit 2026-03-09T17:39:20.440379+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-114", "mode": "writeback"}]': finished 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: cluster 2026-03-09T17:39:20.443696+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: cluster 2026-03-09T17:39:20.443696+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: audit 2026-03-09T17:39:20.502823+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: audit 2026-03-09T17:39:20.502823+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: audit 2026-03-09T17:39:20.503345+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: audit 2026-03-09T17:39:20.503345+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: cluster 2026-03-09T17:39:20.825264+0000 mgr.y (mgr.14505) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:39:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:21 vm00 bash[28333]: cluster 2026-03-09T17:39:20.825264+0000 mgr.y (mgr.14505) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:39:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:39:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:21.550677+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:21.550677+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: cluster 2026-03-09T17:39:21.557078+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: cluster 2026-03-09T17:39:21.557078+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:21.562385+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:21.562385+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:21.562559+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:21.562559+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: cluster 2026-03-09T17:39:21.779278+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: cluster 2026-03-09T17:39:21.779278+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:22.064855+0000 mgr.y (mgr.14505) 516 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:22 vm00 bash[28333]: audit 2026-03-09T17:39:22.064855+0000 mgr.y (mgr.14505) 516 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:21.550677+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:21.550677+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: cluster 2026-03-09T17:39:21.557078+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: cluster 2026-03-09T17:39:21.557078+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:21.562385+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:21.562385+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:21.562559+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:21.562559+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: cluster 2026-03-09T17:39:21.779278+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: cluster 2026-03-09T17:39:21.779278+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:22.064855+0000 mgr.y (mgr.14505) 516 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:22 vm00 bash[20770]: audit 2026-03-09T17:39:22.064855+0000 mgr.y (mgr.14505) 516 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:21.550677+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:21.550677+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: cluster 2026-03-09T17:39:21.557078+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: cluster 2026-03-09T17:39:21.557078+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:21.562385+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:21.562385+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:21.562559+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:21.562559+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]: dispatch 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: cluster 2026-03-09T17:39:21.779278+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: cluster 2026-03-09T17:39:21.779278+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:22.064855+0000 mgr.y (mgr.14505) 516 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:23.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:22 vm02 bash[23351]: audit 2026-03-09T17:39:22.064855+0000 mgr.y (mgr.14505) 516 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: cluster 2026-03-09T17:39:22.550820+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: cluster 2026-03-09T17:39:22.550820+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: audit 2026-03-09T17:39:22.652001+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: audit 2026-03-09T17:39:22.652001+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: cluster 2026-03-09T17:39:22.693938+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: cluster 2026-03-09T17:39:22.693938+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: cluster 2026-03-09T17:39:22.825840+0000 mgr.y (mgr.14505) 517 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:23 vm00 bash[28333]: cluster 2026-03-09T17:39:22.825840+0000 mgr.y (mgr.14505) 517 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: cluster 2026-03-09T17:39:22.550820+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: cluster 2026-03-09T17:39:22.550820+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: audit 2026-03-09T17:39:22.652001+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: audit 2026-03-09T17:39:22.652001+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: cluster 2026-03-09T17:39:22.693938+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: cluster 2026-03-09T17:39:22.693938+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: cluster 2026-03-09T17:39:22.825840+0000 mgr.y (mgr.14505) 517 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:39:24.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:23 vm00 bash[20770]: cluster 2026-03-09T17:39:22.825840+0000 mgr.y (mgr.14505) 517 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: cluster 2026-03-09T17:39:22.550820+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: cluster 2026-03-09T17:39:22.550820+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: audit 2026-03-09T17:39:22.652001+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: audit 2026-03-09T17:39:22.652001+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-114"}]': finished 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: cluster 2026-03-09T17:39:22.693938+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: cluster 2026-03-09T17:39:22.693938+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: cluster 2026-03-09T17:39:22.825840+0000 mgr.y (mgr.14505) 517 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:39:24.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:23 vm02 bash[23351]: cluster 2026-03-09T17:39:22.825840+0000 mgr.y (mgr.14505) 517 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:39:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:24 vm00 bash[28333]: cluster 2026-03-09T17:39:23.770965+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T17:39:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:24 vm00 bash[28333]: cluster 2026-03-09T17:39:23.770965+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T17:39:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:24 vm00 bash[20770]: cluster 2026-03-09T17:39:23.770965+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T17:39:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:24 vm00 bash[20770]: cluster 2026-03-09T17:39:23.770965+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T17:39:25.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:24 vm02 bash[23351]: cluster 2026-03-09T17:39:23.770965+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T17:39:25.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:24 vm02 bash[23351]: cluster 2026-03-09T17:39:23.770965+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: cluster 2026-03-09T17:39:24.788502+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: cluster 2026-03-09T17:39:24.788502+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: audit 2026-03-09T17:39:24.802457+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: audit 2026-03-09T17:39:24.802457+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: audit 2026-03-09T17:39:24.802963+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: audit 2026-03-09T17:39:24.802963+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: cluster 2026-03-09T17:39:24.826210+0000 mgr.y (mgr.14505) 518 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:25 vm00 bash[28333]: cluster 2026-03-09T17:39:24.826210+0000 mgr.y (mgr.14505) 518 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: cluster 2026-03-09T17:39:24.788502+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: cluster 2026-03-09T17:39:24.788502+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: audit 2026-03-09T17:39:24.802457+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: audit 2026-03-09T17:39:24.802457+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: audit 2026-03-09T17:39:24.802963+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: audit 2026-03-09T17:39:24.802963+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: cluster 2026-03-09T17:39:24.826210+0000 mgr.y (mgr.14505) 518 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:26.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:25 vm00 bash[20770]: cluster 2026-03-09T17:39:24.826210+0000 mgr.y (mgr.14505) 518 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: cluster 2026-03-09T17:39:24.788502+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: cluster 2026-03-09T17:39:24.788502+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: audit 2026-03-09T17:39:24.802457+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: audit 2026-03-09T17:39:24.802457+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: audit 2026-03-09T17:39:24.802963+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: audit 2026-03-09T17:39:24.802963+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: cluster 2026-03-09T17:39:24.826210+0000 mgr.y (mgr.14505) 518 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:26.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:25 vm02 bash[23351]: cluster 2026-03-09T17:39:24.826210+0000 mgr.y (mgr.14505) 518 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:26.787 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:39:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:39:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:26 vm00 bash[20770]: audit 2026-03-09T17:39:25.785023+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:26 vm00 bash[20770]: audit 2026-03-09T17:39:25.785023+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:26 vm00 bash[20770]: cluster 2026-03-09T17:39:25.798635+0000 mon.a (mon.0) 3022 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:26 vm00 bash[20770]: cluster 2026-03-09T17:39:25.798635+0000 mon.a (mon.0) 3022 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:26 vm00 bash[20770]: cluster 2026-03-09T17:39:26.779792+0000 mon.a (mon.0) 3023 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:26 vm00 bash[20770]: cluster 2026-03-09T17:39:26.779792+0000 mon.a (mon.0) 3023 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:26 vm00 bash[28333]: audit 2026-03-09T17:39:25.785023+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:26 vm00 bash[28333]: audit 2026-03-09T17:39:25.785023+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:26 vm00 bash[28333]: cluster 2026-03-09T17:39:25.798635+0000 mon.a (mon.0) 3022 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:26 vm00 bash[28333]: cluster 2026-03-09T17:39:25.798635+0000 mon.a (mon.0) 3022 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:26 vm00 bash[28333]: cluster 2026-03-09T17:39:26.779792+0000 mon.a (mon.0) 3023 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:26 vm00 bash[28333]: cluster 2026-03-09T17:39:26.779792+0000 mon.a (mon.0) 3023 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:26 vm02 bash[23351]: audit 2026-03-09T17:39:25.785023+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:26 vm02 bash[23351]: audit 2026-03-09T17:39:25.785023+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:26 vm02 bash[23351]: cluster 2026-03-09T17:39:25.798635+0000 mon.a (mon.0) 3022 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T17:39:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:26 vm02 bash[23351]: cluster 2026-03-09T17:39:25.798635+0000 mon.a (mon.0) 3022 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T17:39:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:26 vm02 bash[23351]: cluster 2026-03-09T17:39:26.779792+0000 mon.a (mon.0) 3023 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:27.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:26 vm02 bash[23351]: cluster 2026-03-09T17:39:26.779792+0000 mon.a (mon.0) 3023 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: cluster 2026-03-09T17:39:26.797685+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: cluster 2026-03-09T17:39:26.797685+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: audit 2026-03-09T17:39:26.819022+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: audit 2026-03-09T17:39:26.819022+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: audit 2026-03-09T17:39:26.819232+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: audit 2026-03-09T17:39:26.819232+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: cluster 2026-03-09T17:39:26.826490+0000 mgr.y (mgr.14505) 519 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:28.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:27 vm02 bash[23351]: cluster 2026-03-09T17:39:26.826490+0000 mgr.y (mgr.14505) 519 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: cluster 2026-03-09T17:39:26.797685+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: cluster 2026-03-09T17:39:26.797685+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: audit 2026-03-09T17:39:26.819022+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: audit 2026-03-09T17:39:26.819022+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: audit 2026-03-09T17:39:26.819232+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: audit 2026-03-09T17:39:26.819232+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: cluster 2026-03-09T17:39:26.826490+0000 mgr.y (mgr.14505) 519 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:27 vm00 bash[20770]: cluster 2026-03-09T17:39:26.826490+0000 mgr.y (mgr.14505) 519 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: cluster 2026-03-09T17:39:26.797685+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: cluster 2026-03-09T17:39:26.797685+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: audit 2026-03-09T17:39:26.819022+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: audit 2026-03-09T17:39:26.819022+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: audit 2026-03-09T17:39:26.819232+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: audit 2026-03-09T17:39:26.819232+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: cluster 2026-03-09T17:39:26.826490+0000 mgr.y (mgr.14505) 519 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:27 vm00 bash[28333]: cluster 2026-03-09T17:39:26.826490+0000 mgr.y (mgr.14505) 519 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.805534+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.805534+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: cluster 2026-03-09T17:39:27.816082+0000 mon.a (mon.0) 3027 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: cluster 2026-03-09T17:39:27.816082+0000 mon.a (mon.0) 3027 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.825079+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.825079+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.825405+0000 mon.a (mon.0) 3028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.825405+0000 mon.a (mon.0) 3028 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.942083+0000 mon.c (mon.2) 659 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:29.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:28 vm02 bash[23351]: audit 2026-03-09T17:39:27.942083+0000 mon.c (mon.2) 659 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.805534+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.805534+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: cluster 2026-03-09T17:39:27.816082+0000 mon.a (mon.0) 3027 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: cluster 2026-03-09T17:39:27.816082+0000 mon.a (mon.0) 3027 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.825079+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.825079+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.825405+0000 mon.a (mon.0) 3028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.825405+0000 mon.a (mon.0) 3028 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.942083+0000 mon.c (mon.2) 659 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:28 vm00 bash[20770]: audit 2026-03-09T17:39:27.942083+0000 mon.c (mon.2) 659 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.805534+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.805534+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: cluster 2026-03-09T17:39:27.816082+0000 mon.a (mon.0) 3027 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T17:39:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: cluster 2026-03-09T17:39:27.816082+0000 mon.a (mon.0) 3027 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T17:39:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.825079+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.825079+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.825405+0000 mon.a (mon.0) 3028 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.825405+0000 mon.a (mon.0) 3028 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.942083+0000 mon.c (mon.2) 659 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:29.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:28 vm00 bash[28333]: audit 2026-03-09T17:39:27.942083+0000 mon.c (mon.2) 659 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: audit 2026-03-09T17:39:28.817917+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: audit 2026-03-09T17:39:28.817917+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: cluster 2026-03-09T17:39:28.819421+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: cluster 2026-03-09T17:39:28.819421+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: cluster 2026-03-09T17:39:28.826820+0000 mgr.y (mgr.14505) 520 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: cluster 2026-03-09T17:39:28.826820+0000 mgr.y (mgr.14505) 520 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: audit 2026-03-09T17:39:28.829236+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: audit 2026-03-09T17:39:28.829236+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: audit 2026-03-09T17:39:28.829644+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:29 vm02 bash[23351]: audit 2026-03-09T17:39:28.829644+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: audit 2026-03-09T17:39:28.817917+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: audit 2026-03-09T17:39:28.817917+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: cluster 2026-03-09T17:39:28.819421+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: cluster 2026-03-09T17:39:28.819421+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: cluster 2026-03-09T17:39:28.826820+0000 mgr.y (mgr.14505) 520 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: cluster 2026-03-09T17:39:28.826820+0000 mgr.y (mgr.14505) 520 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: audit 2026-03-09T17:39:28.829236+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: audit 2026-03-09T17:39:28.829236+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: audit 2026-03-09T17:39:28.829644+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:29 vm00 bash[28333]: audit 2026-03-09T17:39:28.829644+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: audit 2026-03-09T17:39:28.817917+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: audit 2026-03-09T17:39:28.817917+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: cluster 2026-03-09T17:39:28.819421+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: cluster 2026-03-09T17:39:28.819421+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: cluster 2026-03-09T17:39:28.826820+0000 mgr.y (mgr.14505) 520 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: cluster 2026-03-09T17:39:28.826820+0000 mgr.y (mgr.14505) 520 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: audit 2026-03-09T17:39:28.829236+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: audit 2026-03-09T17:39:28.829236+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: audit 2026-03-09T17:39:28.829644+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:29 vm00 bash[20770]: audit 2026-03-09T17:39:28.829644+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]: dispatch 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: cluster 2026-03-09T17:39:29.819128+0000 mon.a (mon.0) 3032 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: cluster 2026-03-09T17:39:29.819128+0000 mon.a (mon.0) 3032 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: audit 2026-03-09T17:39:29.827417+0000 mon.a (mon.0) 3033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]': finished 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: audit 2026-03-09T17:39:29.827417+0000 mon.a (mon.0) 3033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]': finished 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: cluster 2026-03-09T17:39:29.832710+0000 mon.a (mon.0) 3034 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: cluster 2026-03-09T17:39:29.832710+0000 mon.a (mon.0) 3034 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: audit 2026-03-09T17:39:29.864531+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: audit 2026-03-09T17:39:29.864531+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: audit 2026-03-09T17:39:29.864712+0000 mgr.y (mgr.14505) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:30 vm02 bash[23351]: audit 2026-03-09T17:39:29.864712+0000 mgr.y (mgr.14505) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: cluster 2026-03-09T17:39:29.819128+0000 mon.a (mon.0) 3032 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: cluster 2026-03-09T17:39:29.819128+0000 mon.a (mon.0) 3032 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: audit 2026-03-09T17:39:29.827417+0000 mon.a (mon.0) 3033 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]': finished 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: audit 2026-03-09T17:39:29.827417+0000 mon.a (mon.0) 3033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]': finished 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: cluster 2026-03-09T17:39:29.832710+0000 mon.a (mon.0) 3034 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: cluster 2026-03-09T17:39:29.832710+0000 mon.a (mon.0) 3034 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: audit 2026-03-09T17:39:29.864531+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: audit 2026-03-09T17:39:29.864531+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: audit 2026-03-09T17:39:29.864712+0000 mgr.y (mgr.14505) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:30 vm00 bash[20770]: audit 2026-03-09T17:39:29.864712+0000 mgr.y (mgr.14505) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: cluster 2026-03-09T17:39:29.819128+0000 mon.a (mon.0) 3032 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: cluster 2026-03-09T17:39:29.819128+0000 mon.a (mon.0) 3032 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: audit 2026-03-09T17:39:29.827417+0000 mon.a (mon.0) 3033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]': finished 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: audit 2026-03-09T17:39:29.827417+0000 mon.a (mon.0) 3033 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-116", "mode": "writeback"}]': finished 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: cluster 2026-03-09T17:39:29.832710+0000 mon.a (mon.0) 3034 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: cluster 2026-03-09T17:39:29.832710+0000 mon.a (mon.0) 3034 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: audit 2026-03-09T17:39:29.864531+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: audit 2026-03-09T17:39:29.864531+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: audit 2026-03-09T17:39:29.864712+0000 mgr.y (mgr.14505) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:31.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:30 vm00 bash[28333]: audit 2026-03-09T17:39:29.864712+0000 mgr.y (mgr.14505) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "309.4"}]: dispatch 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:30.570029+0000 osd.6 (osd.6) 21 : cluster [DBG] 309.4 scrub starts 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:30.570029+0000 osd.6 (osd.6) 21 : cluster [DBG] 309.4 scrub starts 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:30.571077+0000 osd.6 (osd.6) 22 : cluster [DBG] 309.4 scrub ok 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:30.571077+0000 osd.6 (osd.6) 22 : cluster [DBG] 309.4 scrub ok 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:30.827153+0000 mgr.y (mgr.14505) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:30.827153+0000 mgr.y (mgr.14505) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:31.780382+0000 mon.a (mon.0) 3035 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:31 vm00 bash[20770]: cluster 2026-03-09T17:39:31.780382+0000 mon.a (mon.0) 3035 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:32.288 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:30.570029+0000 osd.6 (osd.6) 21 : cluster [DBG] 309.4 scrub starts 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:30.570029+0000 osd.6 (osd.6) 21 : cluster [DBG] 309.4 scrub starts 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:30.571077+0000 osd.6 (osd.6) 22 : cluster [DBG] 309.4 scrub ok 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:30.571077+0000 osd.6 (osd.6) 22 : cluster [DBG] 309.4 scrub ok 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:30.827153+0000 mgr.y (mgr.14505) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:30.827153+0000 mgr.y (mgr.14505) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:31.780382+0000 mon.a (mon.0) 3035 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:31 vm00 bash[28333]: cluster 2026-03-09T17:39:31.780382+0000 mon.a (mon.0) 3035 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:30.570029+0000 osd.6 (osd.6) 21 : cluster [DBG] 309.4 scrub starts 2026-03-09T17:39:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:30.570029+0000 osd.6 (osd.6) 21 : cluster [DBG] 309.4 scrub starts 2026-03-09T17:39:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:30.571077+0000 osd.6 (osd.6) 22 : cluster [DBG] 309.4 scrub ok 2026-03-09T17:39:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:30.571077+0000 osd.6 (osd.6) 22 : cluster [DBG] 309.4 scrub ok 2026-03-09T17:39:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:30.827153+0000 mgr.y (mgr.14505) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:30.827153+0000 mgr.y (mgr.14505) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:39:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 bash[23351]: cluster 2026-03-09T17:39:31.780382+0000 mon.a (mon.0) 3035 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:32.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:31 vm02 
bash[23351]: cluster 2026-03-09T17:39:31.780382+0000 mon.a (mon.0) 3035 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:32.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:39:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:39:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:32 vm00 bash[20770]: audit 2026-03-09T17:39:32.074021+0000 mgr.y (mgr.14505) 523 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:32 vm00 bash[20770]: audit 2026-03-09T17:39:32.074021+0000 mgr.y (mgr.14505) 523 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:32 vm00 bash[28333]: audit 2026-03-09T17:39:32.074021+0000 mgr.y (mgr.14505) 523 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:32 vm00 bash[28333]: audit 2026-03-09T17:39:32.074021+0000 mgr.y (mgr.14505) 523 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:32 vm02 bash[23351]: audit 2026-03-09T17:39:32.074021+0000 mgr.y (mgr.14505) 523 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:32 vm02 bash[23351]: audit 2026-03-09T17:39:32.074021+0000 mgr.y (mgr.14505) 523 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:33 vm00 bash[20770]: cluster 2026-03-09T17:39:32.827699+0000 mgr.y (mgr.14505) 524 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:39:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:33 vm00 bash[20770]: cluster 2026-03-09T17:39:32.827699+0000 mgr.y (mgr.14505) 524 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:39:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:33 vm00 bash[28333]: cluster 2026-03-09T17:39:32.827699+0000 mgr.y (mgr.14505) 524 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:39:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:33 vm00 bash[28333]: cluster 2026-03-09T17:39:32.827699+0000 mgr.y (mgr.14505) 524 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:39:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:33 vm02 bash[23351]: cluster 2026-03-09T17:39:32.827699+0000 mgr.y (mgr.14505) 524 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 
2026-03-09T17:39:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:33 vm02 bash[23351]: cluster 2026-03-09T17:39:32.827699+0000 mgr.y (mgr.14505) 524 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:39:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:35 vm00 bash[20770]: cluster 2026-03-09T17:39:34.828292+0000 mgr.y (mgr.14505) 525 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1021 B/s rd, 1 op/s 2026-03-09T17:39:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:35 vm00 bash[20770]: cluster 2026-03-09T17:39:34.828292+0000 mgr.y (mgr.14505) 525 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1021 B/s rd, 1 op/s 2026-03-09T17:39:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:35 vm00 bash[28333]: cluster 2026-03-09T17:39:34.828292+0000 mgr.y (mgr.14505) 525 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1021 B/s rd, 1 op/s 2026-03-09T17:39:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:35 vm00 bash[28333]: cluster 2026-03-09T17:39:34.828292+0000 mgr.y (mgr.14505) 525 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1021 B/s rd, 1 op/s 2026-03-09T17:39:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:35 vm02 bash[23351]: cluster 2026-03-09T17:39:34.828292+0000 mgr.y (mgr.14505) 525 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1021 B/s rd, 1 op/s 2026-03-09T17:39:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:35 vm02 bash[23351]: cluster 2026-03-09T17:39:34.828292+0000 mgr.y (mgr.14505) 525 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1021 B/s rd, 1 op/s 2026-03-09T17:39:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:39:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:39:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:39:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:37 vm00 bash[20770]: cluster 2026-03-09T17:39:36.828615+0000 mgr.y (mgr.14505) 526 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-09T17:39:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:37 vm00 bash[20770]: cluster 2026-03-09T17:39:36.828615+0000 mgr.y (mgr.14505) 526 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-09T17:39:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:37 vm00 bash[28333]: cluster 2026-03-09T17:39:36.828615+0000 mgr.y (mgr.14505) 526 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-09T17:39:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:37 vm00 bash[28333]: cluster 2026-03-09T17:39:36.828615+0000 mgr.y (mgr.14505) 526 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-09T17:39:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:37 vm02 bash[23351]: cluster 2026-03-09T17:39:36.828615+0000 mgr.y (mgr.14505) 526 : cluster [DBG] pgmap 
v894: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-09T17:39:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:37 vm02 bash[23351]: cluster 2026-03-09T17:39:36.828615+0000 mgr.y (mgr.14505) 526 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-09T17:39:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:39 vm00 bash[20770]: cluster 2026-03-09T17:39:38.829437+0000 mgr.y (mgr.14505) 527 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:39 vm00 bash[20770]: cluster 2026-03-09T17:39:38.829437+0000 mgr.y (mgr.14505) 527 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:39 vm00 bash[28333]: cluster 2026-03-09T17:39:38.829437+0000 mgr.y (mgr.14505) 527 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:39 vm00 bash[28333]: cluster 2026-03-09T17:39:38.829437+0000 mgr.y (mgr.14505) 527 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:39 vm02 bash[23351]: cluster 2026-03-09T17:39:38.829437+0000 mgr.y (mgr.14505) 527 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:39 vm02 bash[23351]: cluster 2026-03-09T17:39:38.829437+0000 mgr.y (mgr.14505) 527 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:39:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:41 vm00 bash[28333]: cluster 2026-03-09T17:39:40.829770+0000 mgr.y (mgr.14505) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:39:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:41 vm00 bash[28333]: cluster 2026-03-09T17:39:40.829770+0000 mgr.y (mgr.14505) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:39:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:41 vm00 bash[20770]: cluster 2026-03-09T17:39:40.829770+0000 mgr.y (mgr.14505) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:39:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:41 vm00 bash[20770]: cluster 2026-03-09T17:39:40.829770+0000 mgr.y (mgr.14505) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:39:42.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:39:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:39:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:41 vm02 bash[23351]: cluster 
2026-03-09T17:39:40.829770+0000 mgr.y (mgr.14505) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:39:42.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:41 vm02 bash[23351]: cluster 2026-03-09T17:39:40.829770+0000 mgr.y (mgr.14505) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:39:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:43 vm00 bash[28333]: audit 2026-03-09T17:39:42.078794+0000 mgr.y (mgr.14505) 529 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:43 vm00 bash[28333]: audit 2026-03-09T17:39:42.078794+0000 mgr.y (mgr.14505) 529 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:43 vm00 bash[20770]: audit 2026-03-09T17:39:42.078794+0000 mgr.y (mgr.14505) 529 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:43 vm00 bash[20770]: audit 2026-03-09T17:39:42.078794+0000 mgr.y (mgr.14505) 529 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:43 vm02 bash[23351]: audit 2026-03-09T17:39:42.078794+0000 mgr.y (mgr.14505) 529 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:43 vm02 bash[23351]: audit 2026-03-09T17:39:42.078794+0000 mgr.y (mgr.14505) 529 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:44 vm00 bash[20770]: cluster 2026-03-09T17:39:42.830551+0000 mgr.y (mgr.14505) 530 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:44 vm00 bash[20770]: cluster 2026-03-09T17:39:42.830551+0000 mgr.y (mgr.14505) 530 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:44 vm00 bash[20770]: audit 2026-03-09T17:39:43.230911+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:44 vm00 bash[20770]: audit 2026-03-09T17:39:43.230911+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:44 vm00 bash[20770]: audit 2026-03-09T17:39:43.234982+0000 mon.c (mon.2) 662 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:44 vm00 bash[20770]: 
audit 2026-03-09T17:39:43.234982+0000 mon.c (mon.2) 662 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:44 vm00 bash[28333]: cluster 2026-03-09T17:39:42.830551+0000 mgr.y (mgr.14505) 530 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:44 vm00 bash[28333]: cluster 2026-03-09T17:39:42.830551+0000 mgr.y (mgr.14505) 530 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:44 vm00 bash[28333]: audit 2026-03-09T17:39:43.230911+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:44 vm00 bash[28333]: audit 2026-03-09T17:39:43.230911+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:44 vm00 bash[28333]: audit 2026-03-09T17:39:43.234982+0000 mon.c (mon.2) 662 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:44 vm00 bash[28333]: audit 2026-03-09T17:39:43.234982+0000 mon.c (mon.2) 662 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:44 vm02 bash[23351]: cluster 2026-03-09T17:39:42.830551+0000 mgr.y (mgr.14505) 530 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:39:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:44 vm02 bash[23351]: cluster 2026-03-09T17:39:42.830551+0000 mgr.y (mgr.14505) 530 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T17:39:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:44 vm02 bash[23351]: audit 2026-03-09T17:39:43.230911+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:44 vm02 bash[23351]: audit 2026-03-09T17:39:43.230911+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:44 vm02 bash[23351]: audit 2026-03-09T17:39:43.234982+0000 mon.c (mon.2) 662 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:44 vm02 bash[23351]: audit 2026-03-09T17:39:43.234982+0000 mon.c (mon.2) 662 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: cluster 2026-03-09T17:39:44.831058+0000 mgr.y (mgr.14505) 531 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 
946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: cluster 2026-03-09T17:39:44.831058+0000 mgr.y (mgr.14505) 531 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: cluster 2026-03-09T17:39:45.248528+0000 mon.a (mon.0) 3037 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: cluster 2026-03-09T17:39:45.248528+0000 mon.a (mon.0) 3037 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: audit 2026-03-09T17:39:45.278052+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: audit 2026-03-09T17:39:45.278052+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: audit 2026-03-09T17:39:45.278302+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:46 vm00 bash[28333]: audit 2026-03-09T17:39:45.278302+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:39:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:39:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: cluster 2026-03-09T17:39:44.831058+0000 mgr.y (mgr.14505) 531 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: cluster 2026-03-09T17:39:44.831058+0000 mgr.y (mgr.14505) 531 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: cluster 2026-03-09T17:39:45.248528+0000 mon.a (mon.0) 3037 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: cluster 2026-03-09T17:39:45.248528+0000 mon.a (mon.0) 3037 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: audit 2026-03-09T17:39:45.278052+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: audit 2026-03-09T17:39:45.278052+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: audit 2026-03-09T17:39:45.278302+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:46 vm00 bash[20770]: audit 2026-03-09T17:39:45.278302+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: cluster 2026-03-09T17:39:44.831058+0000 mgr.y (mgr.14505) 531 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: cluster 2026-03-09T17:39:44.831058+0000 mgr.y (mgr.14505) 531 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: cluster 2026-03-09T17:39:45.248528+0000 mon.a (mon.0) 3037 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: cluster 2026-03-09T17:39:45.248528+0000 mon.a (mon.0) 3037 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: audit 2026-03-09T17:39:45.278052+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: audit 2026-03-09T17:39:45.278052+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: audit 2026-03-09T17:39:45.278302+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:46 vm02 bash[23351]: audit 2026-03-09T17:39:45.278302+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:46.264359+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:46.264359+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: cluster 2026-03-09T17:39:46.268348+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: cluster 2026-03-09T17:39:46.268348+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:46.269759+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:46.269759+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:46.273632+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:46.273632+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: cluster 2026-03-09T17:39:47.264347+0000 mon.a (mon.0) 3042 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: cluster 2026-03-09T17:39:47.264347+0000 mon.a (mon.0) 3042 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:47.268064+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: audit 2026-03-09T17:39:47.268064+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:47.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: cluster 2026-03-09T17:39:47.284483+0000 mon.a (mon.0) 3044 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T17:39:47.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:47 vm02 bash[23351]: cluster 2026-03-09T17:39:47.284483+0000 mon.a (mon.0) 3044 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:46.264359+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:46.264359+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: cluster 2026-03-09T17:39:46.268348+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: cluster 2026-03-09T17:39:46.268348+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:46.269759+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:46.269759+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:46.273632+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:46.273632+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: cluster 2026-03-09T17:39:47.264347+0000 mon.a (mon.0) 3042 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: cluster 2026-03-09T17:39:47.264347+0000 mon.a (mon.0) 3042 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:47.268064+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: audit 2026-03-09T17:39:47.268064+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: cluster 2026-03-09T17:39:47.284483+0000 mon.a (mon.0) 3044 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:47 vm00 bash[28333]: cluster 2026-03-09T17:39:47.284483+0000 mon.a (mon.0) 3044 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:46.264359+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:46.264359+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: cluster 2026-03-09T17:39:46.268348+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: cluster 2026-03-09T17:39:46.268348+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:46.269759+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:46.269759+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:46.273632+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:46.273632+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]: dispatch 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: cluster 2026-03-09T17:39:47.264347+0000 mon.a (mon.0) 3042 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: cluster 2026-03-09T17:39:47.264347+0000 mon.a (mon.0) 3042 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:47.268064+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: audit 2026-03-09T17:39:47.268064+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-116"}]': finished 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: cluster 2026-03-09T17:39:47.284483+0000 mon.a (mon.0) 3044 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T17:39:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:47 vm00 bash[20770]: cluster 2026-03-09T17:39:47.284483+0000 mon.a (mon.0) 3044 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T17:39:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:48 vm00 bash[28333]: cluster 2026-03-09T17:39:46.831366+0000 mgr.y (mgr.14505) 532 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:39:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:48 vm00 bash[28333]: cluster 2026-03-09T17:39:46.831366+0000 mgr.y (mgr.14505) 532 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:39:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:48 vm00 bash[20770]: cluster 2026-03-09T17:39:46.831366+0000 mgr.y (mgr.14505) 532 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:39:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:48 vm00 bash[20770]: cluster 2026-03-09T17:39:46.831366+0000 mgr.y (mgr.14505) 532 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:39:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:48 vm02 bash[23351]: cluster 2026-03-09T17:39:46.831366+0000 mgr.y (mgr.14505) 532 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:39:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:48 vm02 bash[23351]: cluster 2026-03-09T17:39:46.831366+0000 mgr.y (mgr.14505) 532 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:49 vm00 bash[28333]: cluster 2026-03-09T17:39:48.494673+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:49 vm00 bash[28333]: cluster 2026-03-09T17:39:48.494673+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:49 vm00 bash[28333]: cluster 2026-03-09T17:39:48.831686+0000 mgr.y (mgr.14505) 533 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:49 vm00 bash[28333]: cluster 2026-03-09T17:39:48.831686+0000 mgr.y (mgr.14505) 533 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:49 vm00 bash[20770]: cluster 2026-03-09T17:39:48.494673+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap 
e580: 8 total, 8 up, 8 in 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:49 vm00 bash[20770]: cluster 2026-03-09T17:39:48.494673+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:49 vm00 bash[20770]: cluster 2026-03-09T17:39:48.831686+0000 mgr.y (mgr.14505) 533 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:49 vm00 bash[20770]: cluster 2026-03-09T17:39:48.831686+0000 mgr.y (mgr.14505) 533 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:49 vm02 bash[23351]: cluster 2026-03-09T17:39:48.494673+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T17:39:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:49 vm02 bash[23351]: cluster 2026-03-09T17:39:48.494673+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T17:39:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:49 vm02 bash[23351]: cluster 2026-03-09T17:39:48.831686+0000 mgr.y (mgr.14505) 533 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:49 vm02 bash[23351]: cluster 2026-03-09T17:39:48.831686+0000 mgr.y (mgr.14505) 533 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:50 vm00 bash[28333]: cluster 2026-03-09T17:39:49.520122+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:50 vm00 bash[28333]: cluster 2026-03-09T17:39:49.520122+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:50 vm00 bash[28333]: audit 2026-03-09T17:39:49.541915+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:50 vm00 bash[28333]: audit 2026-03-09T17:39:49.541915+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:50 vm00 bash[28333]: audit 2026-03-09T17:39:49.542305+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:50 vm00 bash[28333]: audit 2026-03-09T17:39:49.542305+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:50 vm00 bash[20770]: cluster 2026-03-09T17:39:49.520122+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:50 vm00 bash[20770]: cluster 2026-03-09T17:39:49.520122+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:50 vm00 bash[20770]: audit 2026-03-09T17:39:49.541915+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:50 vm00 bash[20770]: audit 2026-03-09T17:39:49.541915+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:50 vm00 bash[20770]: audit 2026-03-09T17:39:49.542305+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:50 vm00 bash[20770]: audit 2026-03-09T17:39:49.542305+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:50 vm02 bash[23351]: cluster 2026-03-09T17:39:49.520122+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T17:39:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:50 vm02 bash[23351]: cluster 2026-03-09T17:39:49.520122+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T17:39:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:50 vm02 bash[23351]: audit 2026-03-09T17:39:49.541915+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:50 vm02 bash[23351]: audit 2026-03-09T17:39:49.541915+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:50 vm02 bash[23351]: audit 2026-03-09T17:39:49.542305+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:50 vm02 bash[23351]: audit 2026-03-09T17:39:49.542305+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:39:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:51 vm02 bash[23351]: audit 2026-03-09T17:39:50.514120+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:51 vm02 bash[23351]: audit 2026-03-09T17:39:50.514120+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:51 vm02 bash[23351]: cluster 2026-03-09T17:39:50.517219+0000 mon.a (mon.0) 3049 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T17:39:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:51 vm02 bash[23351]: cluster 2026-03-09T17:39:50.517219+0000 mon.a (mon.0) 3049 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T17:39:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:51 vm02 bash[23351]: cluster 2026-03-09T17:39:50.832071+0000 mgr.y (mgr.14505) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:51 vm02 bash[23351]: cluster 2026-03-09T17:39:50.832071+0000 mgr.y (mgr.14505) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:51 vm00 bash[28333]: audit 2026-03-09T17:39:50.514120+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:51 vm00 bash[28333]: audit 2026-03-09T17:39:50.514120+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:51 vm00 bash[28333]: cluster 2026-03-09T17:39:50.517219+0000 mon.a (mon.0) 3049 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:51 vm00 bash[28333]: cluster 2026-03-09T17:39:50.517219+0000 mon.a (mon.0) 3049 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:51 vm00 bash[28333]: cluster 2026-03-09T17:39:50.832071+0000 mgr.y (mgr.14505) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:51 vm00 bash[28333]: cluster 2026-03-09T17:39:50.832071+0000 mgr.y (mgr.14505) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:51 vm00 bash[20770]: audit 2026-03-09T17:39:50.514120+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:51 vm00 bash[20770]: audit 2026-03-09T17:39:50.514120+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:51 vm00 bash[20770]: cluster 2026-03-09T17:39:50.517219+0000 mon.a (mon.0) 3049 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:51 vm00 bash[20770]: cluster 2026-03-09T17:39:50.517219+0000 mon.a (mon.0) 3049 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:51 vm00 bash[20770]: cluster 2026-03-09T17:39:50.832071+0000 mgr.y (mgr.14505) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:52.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:51 vm00 bash[20770]: cluster 2026-03-09T17:39:50.832071+0000 mgr.y (mgr.14505) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 231 active+clean; 455 KiB data, 946 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T17:39:52.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:39:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:39:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: cluster 2026-03-09T17:39:51.600319+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T17:39:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: cluster 2026-03-09T17:39:51.600319+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T17:39:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: audit 2026-03-09T17:39:51.610824+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: audit 2026-03-09T17:39:51.610824+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: audit 2026-03-09T17:39:51.614918+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: audit 2026-03-09T17:39:51.614918+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: audit 2026-03-09T17:39:52.089674+0000 mgr.y (mgr.14505) 535 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:52 vm02 bash[23351]: audit 2026-03-09T17:39:52.089674+0000 mgr.y (mgr.14505) 535 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: cluster 2026-03-09T17:39:51.600319+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: cluster 2026-03-09T17:39:51.600319+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: audit 2026-03-09T17:39:51.610824+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: audit 2026-03-09T17:39:51.610824+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: audit 2026-03-09T17:39:51.614918+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: audit 2026-03-09T17:39:51.614918+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: audit 2026-03-09T17:39:52.089674+0000 mgr.y (mgr.14505) 535 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:52 vm00 bash[28333]: audit 2026-03-09T17:39:52.089674+0000 mgr.y (mgr.14505) 535 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: cluster 2026-03-09T17:39:51.600319+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: cluster 2026-03-09T17:39:51.600319+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: audit 2026-03-09T17:39:51.610824+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: audit 2026-03-09T17:39:51.610824+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: audit 2026-03-09T17:39:51.614918+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: audit 2026-03-09T17:39:51.614918+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:39:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: audit 2026-03-09T17:39:52.089674+0000 mgr.y (mgr.14505) 535 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:53.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:52 vm00 bash[20770]: audit 2026-03-09T17:39:52.089674+0000 mgr.y (mgr.14505) 535 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:39:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: audit 2026-03-09T17:39:52.610098+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: audit 2026-03-09T17:39:52.610098+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: cluster 2026-03-09T17:39:52.612640+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T17:39:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: cluster 2026-03-09T17:39:52.612640+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T17:39:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: audit 2026-03-09T17:39:52.616704+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: audit 2026-03-09T17:39:52.616704+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: audit 2026-03-09T17:39:52.630765+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: audit 2026-03-09T17:39:52.630765+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: cluster 2026-03-09T17:39:52.832723+0000 mgr.y (mgr.14505) 536 : cluster [DBG] pgmap v910: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 950 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:53 vm02 bash[23351]: cluster 2026-03-09T17:39:52.832723+0000 mgr.y (mgr.14505) 536 : cluster [DBG] pgmap v910: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 950 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: audit 2026-03-09T17:39:52.610098+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: audit 2026-03-09T17:39:52.610098+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: cluster 2026-03-09T17:39:52.612640+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: cluster 2026-03-09T17:39:52.612640+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: audit 2026-03-09T17:39:52.616704+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: audit 2026-03-09T17:39:52.616704+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: audit 2026-03-09T17:39:52.630765+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: audit 2026-03-09T17:39:52.630765+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: cluster 2026-03-09T17:39:52.832723+0000 mgr.y (mgr.14505) 536 : cluster [DBG] pgmap v910: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 950 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:53 vm00 bash[28333]: cluster 2026-03-09T17:39:52.832723+0000 mgr.y (mgr.14505) 536 : cluster [DBG] pgmap v910: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 950 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: audit 2026-03-09T17:39:52.610098+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: audit 2026-03-09T17:39:52.610098+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: cluster 2026-03-09T17:39:52.612640+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: cluster 2026-03-09T17:39:52.612640+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: audit 2026-03-09T17:39:52.616704+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: audit 2026-03-09T17:39:52.616704+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: audit 2026-03-09T17:39:52.630765+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: audit 2026-03-09T17:39:52.630765+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: cluster 2026-03-09T17:39:52.832723+0000 mgr.y (mgr.14505) 536 : cluster [DBG] pgmap v910: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 950 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:53 vm00 bash[20770]: cluster 2026-03-09T17:39:52.832723+0000 mgr.y (mgr.14505) 536 : cluster [DBG] pgmap v910: 268 pgs: 4 unknown, 264 active+clean; 455 KiB data, 950 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: audit 2026-03-09T17:39:53.639030+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: audit 2026-03-09T17:39:53.639030+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: cluster 2026-03-09T17:39:53.648777+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: cluster 2026-03-09T17:39:53.648777+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: audit 2026-03-09T17:39:53.651450+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: audit 2026-03-09T17:39:53.651450+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: audit 2026-03-09T17:39:53.651891+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:54 vm00 bash[28333]: audit 2026-03-09T17:39:53.651891+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: audit 2026-03-09T17:39:53.639030+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: audit 2026-03-09T17:39:53.639030+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: cluster 2026-03-09T17:39:53.648777+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: cluster 2026-03-09T17:39:53.648777+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: audit 2026-03-09T17:39:53.651450+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: audit 2026-03-09T17:39:53.651450+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: audit 2026-03-09T17:39:53.651891+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:54 vm00 bash[20770]: audit 2026-03-09T17:39:53.651891+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: audit 2026-03-09T17:39:53.639030+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: audit 2026-03-09T17:39:53.639030+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: cluster 2026-03-09T17:39:53.648777+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: cluster 2026-03-09T17:39:53.648777+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: audit 2026-03-09T17:39:53.651450+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: audit 2026-03-09T17:39:53.651450+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: audit 2026-03-09T17:39:53.651891+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:54 vm02 bash[23351]: audit 2026-03-09T17:39:53.651891+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]: dispatch 2026-03-09T17:39:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:54.639293+0000 mon.a (mon.0) 3058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:54.639293+0000 mon.a (mon.0) 3058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:54.642208+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]': finished 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:54.642208+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]': finished 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:54.651811+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:54.651811+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:54.833095+0000 mgr.y (mgr.14505) 537 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:54.833095+0000 mgr.y (mgr.14505) 537 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.184934+0000 mon.c (mon.2) 669 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.184934+0000 mon.c (mon.2) 669 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.522286+0000 mon.c (mon.2) 670 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.522286+0000 mon.c (mon.2) 670 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 
2026-03-09T17:39:55.523550+0000 mon.c (mon.2) 671 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.523550+0000 mon.c (mon.2) 671 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.530143+0000 mon.a (mon.0) 3061 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: audit 2026-03-09T17:39:55.530143+0000 mon.a (mon.0) 3061 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:55.648226+0000 mon.a (mon.0) 3062 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:55 vm00 bash[20770]: cluster 2026-03-09T17:39:55.648226+0000 mon.a (mon.0) 3062 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:54.639293+0000 mon.a (mon.0) 3058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:54.639293+0000 mon.a (mon.0) 3058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:54.642208+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]': finished 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:54.642208+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]': finished 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:54.651811+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:54.651811+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:54.833095+0000 mgr.y (mgr.14505) 537 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:54.833095+0000 mgr.y (mgr.14505) 537 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.184934+0000 mon.c (mon.2) 669 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.184934+0000 mon.c (mon.2) 669 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.522286+0000 mon.c (mon.2) 670 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.522286+0000 mon.c (mon.2) 670 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.523550+0000 mon.c (mon.2) 671 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.523550+0000 mon.c (mon.2) 671 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.530143+0000 mon.a (mon.0) 3061 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: audit 2026-03-09T17:39:55.530143+0000 mon.a (mon.0) 3061 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:56.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:55.648226+0000 mon.a (mon.0) 3062 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T17:39:56.039 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:55 vm00 bash[28333]: cluster 2026-03-09T17:39:55.648226+0000 mon.a (mon.0) 3062 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:54.639293+0000 mon.a (mon.0) 3058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:54.639293+0000 mon.a (mon.0) 3058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:54.642208+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]': finished 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:54.642208+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-118", "mode": "writeback"}]': finished 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:54.651811+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:54.651811+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:54.833095+0000 mgr.y (mgr.14505) 537 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:54.833095+0000 mgr.y (mgr.14505) 537 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.184934+0000 mon.c (mon.2) 669 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.184934+0000 mon.c (mon.2) 669 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.522286+0000 mon.c (mon.2) 670 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:39:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.522286+0000 mon.c (mon.2) 670 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:39:56.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 
09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.523550+0000 mon.c (mon.2) 671 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:39:56.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.523550+0000 mon.c (mon.2) 671 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:39:56.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.530143+0000 mon.a (mon.0) 3061 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:56.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: audit 2026-03-09T17:39:55.530143+0000 mon.a (mon.0) 3061 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:56.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:55.648226+0000 mon.a (mon.0) 3062 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T17:39:56.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:55 vm02 bash[23351]: cluster 2026-03-09T17:39:55.648226+0000 mon.a (mon.0) 3062 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T17:39:56.696 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:39:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:39:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:56 vm00 bash[28333]: audit 2026-03-09T17:39:55.710635+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:56 vm00 bash[28333]: audit 2026-03-09T17:39:55.710635+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:56 vm00 bash[28333]: audit 2026-03-09T17:39:55.711068+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:56 vm00 bash[28333]: audit 2026-03-09T17:39:55.711068+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:56 vm00 bash[20770]: audit 2026-03-09T17:39:55.710635+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:56 vm00 bash[20770]: audit 2026-03-09T17:39:55.710635+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:56 vm00 bash[20770]: audit 2026-03-09T17:39:55.711068+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:56 vm00 bash[20770]: audit 2026-03-09T17:39:55.711068+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:56 vm02 bash[23351]: audit 2026-03-09T17:39:55.710635+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:56 vm02 bash[23351]: audit 2026-03-09T17:39:55.710635+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:56 vm02 bash[23351]: audit 2026-03-09T17:39:55.711068+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:56 vm02 bash[23351]: audit 2026-03-09T17:39:55.711068+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.681223+0000 mon.a (mon.0) 3064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.681223+0000 mon.a (mon.0) 3064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.684670+0000 mon.a (mon.0) 3065 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.684670+0000 mon.a (mon.0) 3065 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.689557+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.689557+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.703131+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.703131+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.784270+0000 mon.a (mon.0) 3067 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.784270+0000 mon.a (mon.0) 3067 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.786832+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: audit 2026-03-09T17:39:56.786832+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.797396+0000 mon.a (mon.0) 3069 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.797396+0000 mon.a (mon.0) 3069 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.833459+0000 mgr.y (mgr.14505) 538 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:57 vm00 bash[28333]: cluster 2026-03-09T17:39:56.833459+0000 mgr.y (mgr.14505) 538 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.681223+0000 mon.a (mon.0) 3064 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.681223+0000 mon.a (mon.0) 3064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.684670+0000 mon.a (mon.0) 3065 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.684670+0000 mon.a (mon.0) 3065 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.689557+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.689557+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.703131+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.703131+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.784270+0000 mon.a (mon.0) 3067 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.784270+0000 mon.a (mon.0) 3067 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.786832+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: audit 2026-03-09T17:39:56.786832+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.797396+0000 mon.a (mon.0) 3069 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.797396+0000 mon.a (mon.0) 3069 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.833459+0000 mgr.y (mgr.14505) 538 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:57 vm00 bash[20770]: cluster 2026-03-09T17:39:56.833459+0000 mgr.y (mgr.14505) 538 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.681223+0000 mon.a (mon.0) 3064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.681223+0000 mon.a (mon.0) 3064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.684670+0000 mon.a (mon.0) 3065 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.684670+0000 mon.a (mon.0) 3065 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.689557+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.689557+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.703131+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.703131+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]: dispatch 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.784270+0000 mon.a (mon.0) 3067 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.784270+0000 mon.a (mon.0) 3067 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.786832+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: audit 2026-03-09T17:39:56.786832+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-118"}]': finished 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.797396+0000 mon.a (mon.0) 3069 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.797396+0000 mon.a (mon.0) 3069 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.833459+0000 mgr.y (mgr.14505) 538 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:57 vm02 bash[23351]: cluster 2026-03-09T17:39:56.833459+0000 mgr.y (mgr.14505) 538 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: cluster 2026-03-09T17:39:57.786664+0000 mon.a (mon.0) 3070 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: cluster 2026-03-09T17:39:57.786664+0000 mon.a (mon.0) 3070 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: cluster 2026-03-09T17:39:57.808149+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: cluster 2026-03-09T17:39:57.808149+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: audit 2026-03-09T17:39:58.246201+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: audit 
2026-03-09T17:39:58.246201+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: audit 2026-03-09T17:39:58.249747+0000 mon.c (mon.2) 674 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:58 vm00 bash[28333]: audit 2026-03-09T17:39:58.249747+0000 mon.c (mon.2) 674 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: cluster 2026-03-09T17:39:57.786664+0000 mon.a (mon.0) 3070 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: cluster 2026-03-09T17:39:57.786664+0000 mon.a (mon.0) 3070 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: cluster 2026-03-09T17:39:57.808149+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: cluster 2026-03-09T17:39:57.808149+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: audit 2026-03-09T17:39:58.246201+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: audit 2026-03-09T17:39:58.246201+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: audit 2026-03-09T17:39:58.249747+0000 mon.c (mon.2) 674 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:58 vm00 bash[20770]: audit 2026-03-09T17:39:58.249747+0000 mon.c (mon.2) 674 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: cluster 2026-03-09T17:39:57.786664+0000 mon.a (mon.0) 3070 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: cluster 2026-03-09T17:39:57.786664+0000 mon.a (mon.0) 3070 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: cluster 2026-03-09T17:39:57.808149+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: cluster 2026-03-09T17:39:57.808149+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 
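The audit entries in this stretch of the log trace client.admin setting up and tearing down cache tiers on pool test-rados-api-vm00-60118-111: osd pool application enable on the new cache pool, osd tier add with --force-nonempty, osd tier set-overlay, osd tier cache-mode writeback, and later osd tier remove-overlay and osd tier remove. A minimal sketch of replaying the same mon commands out of band, assuming a reachable test cluster, client.admin credentials, and the python-rados bindings (the mon_cmd helper and the hard-coded pool names are illustrative only, taken from the audit lines here):

import json
import rados

BASE = "test-rados-api-vm00-60118-111"   # base pool named in the audit entries
CACHE = "test-rados-api-vm00-60118-120"  # cache-tier pool named in the audit entries

def mon_cmd(cluster, **kwargs):
    # Each audit entry's cmd=[{...}] payload is a JSON mon command; mon_command()
    # takes the same JSON as a string plus an (empty) input buffer.
    ret, out, errs = cluster.mon_command(json.dumps(kwargs), b"")
    if ret != 0:
        raise RuntimeError(f"{kwargs['prefix']!r} failed: {errs}")
    return out

with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
    mon_cmd(cluster, prefix="osd pool application enable", pool=CACHE,
            app="rados", yes_i_really_mean_it=True)
    mon_cmd(cluster, prefix="osd tier add", pool=BASE, tierpool=CACHE,
            force_nonempty="--force-nonempty")
    mon_cmd(cluster, prefix="osd tier set-overlay", pool=BASE, overlaypool=CACHE)
    mon_cmd(cluster, prefix="osd tier cache-mode", pool=CACHE, mode="writeback")
    # ... the test exercises the tiered pool here ...
    mon_cmd(cluster, prefix="osd tier remove-overlay", pool=BASE)
    mon_cmd(cluster, prefix="osd tier remove", pool=BASE, tierpool=CACHE)

Each keyword set serializes to the same JSON payload that appears after cmd= in the audit entries; the plain ceph CLI would work equally well, the binding is shown only because it mirrors those payloads.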
2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: audit 2026-03-09T17:39:58.246201+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: audit 2026-03-09T17:39:58.246201+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: audit 2026-03-09T17:39:58.249747+0000 mon.c (mon.2) 674 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:39:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:58 vm02 bash[23351]: audit 2026-03-09T17:39:58.249747+0000 mon.c (mon.2) 674 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: cluster 2026-03-09T17:39:58.807010+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T17:40:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: cluster 2026-03-09T17:39:58.807010+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T17:40:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: audit 2026-03-09T17:39:58.812625+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: audit 2026-03-09T17:39:58.812625+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: audit 2026-03-09T17:39:58.814635+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: audit 2026-03-09T17:39:58.814635+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: cluster 2026-03-09T17:39:58.833895+0000 mgr.y (mgr.14505) 539 : cluster [DBG] pgmap v920: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-09T17:40:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:39:59 vm02 bash[23351]: cluster 2026-03-09T17:39:58.833895+0000 mgr.y (mgr.14505) 539 : cluster [DBG] pgmap v920: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: cluster 2026-03-09T17:39:58.807010+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: cluster 2026-03-09T17:39:58.807010+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: audit 2026-03-09T17:39:58.812625+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: audit 2026-03-09T17:39:58.812625+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: audit 2026-03-09T17:39:58.814635+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: audit 2026-03-09T17:39:58.814635+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: cluster 2026-03-09T17:39:58.833895+0000 mgr.y (mgr.14505) 539 : cluster [DBG] pgmap v920: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:39:59 vm00 bash[28333]: cluster 2026-03-09T17:39:58.833895+0000 mgr.y (mgr.14505) 539 : cluster [DBG] pgmap v920: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-09T17:40:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: cluster 2026-03-09T17:39:58.807010+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: cluster 2026-03-09T17:39:58.807010+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: audit 2026-03-09T17:39:58.812625+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: audit 2026-03-09T17:39:58.812625+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: audit 2026-03-09T17:39:58.814635+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: audit 2026-03-09T17:39:58.814635+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: cluster 2026-03-09T17:39:58.833895+0000 mgr.y (mgr.14505) 539 : cluster [DBG] pgmap v920: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-09T17:40:00.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:39:59 vm00 bash[20770]: cluster 2026-03-09T17:39:58.833895+0000 mgr.y (mgr.14505) 539 : cluster [DBG] pgmap v920: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:39:59.800809+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:39:59.800809+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:39:59.813108+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:39:59.813108+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:39:59.862245+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:39:59.862245+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:39:59.862509+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:39:59.862509+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000120+0000 mon.a (mon.0) 3078 : cluster [WRN] Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000120+0000 mon.a (mon.0) 3078 : cluster [WRN] Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000143+0000 mon.a (mon.0) 3079 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000143+0000 mon.a (mon.0) 3079 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000149+0000 mon.a (mon.0) 3080 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000149+0000 mon.a (mon.0) 3080 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000155+0000 mon.a (mon.0) 3081 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000155+0000 mon.a (mon.0) 3081 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000159+0000 mon.a (mon.0) 3082 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000159+0000 mon.a (mon.0) 3082 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000164+0000 mon.a (mon.0) 3083 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-111' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000164+0000 mon.a (mon.0) 3083 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-111' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000168+0000 mon.a (mon.0) 3084 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-120' 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000168+0000 mon.a (mon.0) 3084 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-120' 2026-03-09T17:40:01.137 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000173+0000 mon.a (mon.0) 3085 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.000173+0000 mon.a (mon.0) 3085 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:40:00.803740+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:40:00.803740+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:40:00.809826+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:40:00.809826+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.809832+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: cluster 2026-03-09T17:40:00.809832+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:40:00.810922+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:00 vm02 bash[23351]: audit 2026-03-09T17:40:00.810922+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:39:59.800809+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:39:59.800809+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:39:59.813108+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:39:59.813108+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:39:59.862245+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:39:59.862245+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:39:59.862509+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:39:59.862509+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000120+0000 mon.a (mon.0) 3078 : cluster [WRN] Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000120+0000 mon.a (mon.0) 3078 : cluster [WRN] Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000143+0000 mon.a (mon.0) 3079 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000143+0000 mon.a (mon.0) 3079 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000149+0000 mon.a (mon.0) 3080 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000149+0000 mon.a (mon.0) 3080 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000155+0000 mon.a (mon.0) 3081 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000155+0000 mon.a (mon.0) 3081 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000159+0000 mon.a (mon.0) 3082 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000159+0000 mon.a (mon.0) 3082 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000164+0000 mon.a (mon.0) 3083 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-111' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000164+0000 mon.a (mon.0) 3083 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-111' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000168+0000 mon.a (mon.0) 3084 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-120' 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000168+0000 mon.a (mon.0) 3084 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-120' 2026-03-09T17:40:01.288 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000173+0000 mon.a (mon.0) 3085 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.000173+0000 mon.a (mon.0) 3085 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:40:00.803740+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:40:00.803740+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:40:00.809826+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:40:00.809826+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.809832+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: cluster 2026-03-09T17:40:00.809832+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:40:00.810922+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:00 vm00 bash[28333]: audit 2026-03-09T17:40:00.810922+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:39:59.800809+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:39:59.800809+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:39:59.813108+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:39:59.813108+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:39:59.862245+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:39:59.862245+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:39:59.862509+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:39:59.862509+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000120+0000 mon.a (mon.0) 3078 : cluster [WRN] Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000120+0000 mon.a (mon.0) 3078 : cluster [WRN] Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000143+0000 mon.a (mon.0) 3079 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000143+0000 mon.a (mon.0) 3079 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000149+0000 mon.a (mon.0) 3080 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000149+0000 mon.a (mon.0) 3080 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000155+0000 mon.a (mon.0) 3081 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000155+0000 mon.a (mon.0) 3081 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000159+0000 mon.a (mon.0) 3082 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000159+0000 mon.a (mon.0) 3082 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000164+0000 mon.a (mon.0) 3083 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-111' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000164+0000 mon.a (mon.0) 3083 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-111' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000168+0000 mon.a (mon.0) 3084 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-120' 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000168+0000 mon.a (mon.0) 3084 : cluster [WRN] application not enabled on pool 'test-rados-api-vm00-60118-120' 2026-03-09T17:40:01.289 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000173+0000 mon.a (mon.0) 3085 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.000173+0000 mon.a (mon.0) 3085 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:40:00.803740+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:40:00.803740+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:40:00.809826+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:40:00.809826+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.809832+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: cluster 2026-03-09T17:40:00.809832+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:40:00.810922+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:01.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:00 vm00 bash[20770]: audit 2026-03-09T17:40:00.810922+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: cluster 2026-03-09T17:40:00.834259+0000 mgr.y (mgr.14505) 540 : cluster [DBG] pgmap v923: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: cluster 2026-03-09T17:40:00.834259+0000 mgr.y (mgr.14505) 540 : cluster [DBG] pgmap v923: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: audit 2026-03-09T17:40:01.806529+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: audit 2026-03-09T17:40:01.806529+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: audit 2026-03-09T17:40:01.815046+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: audit 2026-03-09T17:40:01.815046+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: cluster 2026-03-09T17:40:01.816778+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: cluster 2026-03-09T17:40:01.816778+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: audit 2026-03-09T17:40:01.821342+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.098 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:01 vm02 bash[23351]: audit 2026-03-09T17:40:01.821342+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: cluster 2026-03-09T17:40:00.834259+0000 mgr.y (mgr.14505) 540 : cluster [DBG] pgmap v923: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: cluster 2026-03-09T17:40:00.834259+0000 mgr.y (mgr.14505) 540 : cluster [DBG] pgmap v923: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: audit 2026-03-09T17:40:01.806529+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: audit 2026-03-09T17:40:01.806529+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: audit 2026-03-09T17:40:01.815046+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: audit 2026-03-09T17:40:01.815046+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: cluster 2026-03-09T17:40:01.816778+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: cluster 2026-03-09T17:40:01.816778+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: audit 2026-03-09T17:40:01.821342+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:01 vm00 bash[28333]: audit 2026-03-09T17:40:01.821342+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: cluster 2026-03-09T17:40:00.834259+0000 mgr.y (mgr.14505) 540 : cluster [DBG] pgmap v923: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: cluster 2026-03-09T17:40:00.834259+0000 mgr.y (mgr.14505) 540 : cluster [DBG] pgmap v923: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 951 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: audit 2026-03-09T17:40:01.806529+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: audit 2026-03-09T17:40:01.806529+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: audit 2026-03-09T17:40:01.815046+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: audit 2026-03-09T17:40:01.815046+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: cluster 2026-03-09T17:40:01.816778+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T17:40:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: cluster 2026-03-09T17:40:01.816778+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T17:40:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: audit 2026-03-09T17:40:01.821342+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:01 vm00 bash[20770]: audit 2026-03-09T17:40:01.821342+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]: dispatch 2026-03-09T17:40:02.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:40:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: audit 2026-03-09T17:40:02.097682+0000 mgr.y (mgr.14505) 541 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: audit 2026-03-09T17:40:02.097682+0000 mgr.y (mgr.14505) 541 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: cluster 2026-03-09T17:40:02.806625+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: cluster 2026-03-09T17:40:02.806625+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: audit 2026-03-09T17:40:02.811054+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]': finished 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: audit 2026-03-09T17:40:02.811054+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]': finished 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: cluster 2026-03-09T17:40:02.813559+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T17:40:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:02 vm02 bash[23351]: cluster 2026-03-09T17:40:02.813559+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: audit 2026-03-09T17:40:02.097682+0000 mgr.y (mgr.14505) 541 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: audit 2026-03-09T17:40:02.097682+0000 mgr.y (mgr.14505) 541 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: cluster 2026-03-09T17:40:02.806625+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: cluster 2026-03-09T17:40:02.806625+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: audit 2026-03-09T17:40:02.811054+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]': finished 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: audit 2026-03-09T17:40:02.811054+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]': finished 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: cluster 2026-03-09T17:40:02.813559+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:02 vm00 bash[28333]: cluster 2026-03-09T17:40:02.813559+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: audit 2026-03-09T17:40:02.097682+0000 mgr.y (mgr.14505) 541 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: audit 2026-03-09T17:40:02.097682+0000 mgr.y (mgr.14505) 541 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: cluster 2026-03-09T17:40:02.806625+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: cluster 2026-03-09T17:40:02.806625+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: audit 2026-03-09T17:40:02.811054+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]': finished 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: audit 2026-03-09T17:40:02.811054+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-120", "mode": "writeback"}]': finished 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: cluster 2026-03-09T17:40:02.813559+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T17:40:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:02 vm00 bash[20770]: cluster 2026-03-09T17:40:02.813559+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T17:40:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:03 vm02 bash[23351]: cluster 2026-03-09T17:40:02.834578+0000 mgr.y (mgr.14505) 542 : cluster [DBG] pgmap v926: 268 pgs: 7 unknown, 261 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:03 vm02 bash[23351]: cluster 2026-03-09T17:40:02.834578+0000 mgr.y (mgr.14505) 542 : cluster [DBG] pgmap v926: 268 pgs: 7 unknown, 261 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:03 vm02 bash[23351]: audit 2026-03-09T17:40:02.880899+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:03 vm02 bash[23351]: audit 2026-03-09T17:40:02.880899+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:03 vm02 bash[23351]: audit 2026-03-09T17:40:02.881289+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:03 vm02 bash[23351]: audit 2026-03-09T17:40:02.881289+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:03 vm00 bash[28333]: cluster 2026-03-09T17:40:02.834578+0000 mgr.y (mgr.14505) 542 : cluster [DBG] pgmap v926: 268 pgs: 7 unknown, 261 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:03 vm00 bash[28333]: cluster 2026-03-09T17:40:02.834578+0000 mgr.y (mgr.14505) 542 : cluster [DBG] pgmap v926: 268 pgs: 7 unknown, 261 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:03 vm00 bash[28333]: audit 2026-03-09T17:40:02.880899+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:03 vm00 bash[28333]: audit 2026-03-09T17:40:02.880899+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:03 vm00 bash[28333]: audit 2026-03-09T17:40:02.881289+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:03 vm00 bash[28333]: audit 2026-03-09T17:40:02.881289+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:03 vm00 bash[20770]: cluster 2026-03-09T17:40:02.834578+0000 mgr.y (mgr.14505) 542 : cluster [DBG] pgmap v926: 268 pgs: 7 unknown, 261 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:03 vm00 bash[20770]: cluster 2026-03-09T17:40:02.834578+0000 mgr.y (mgr.14505) 542 : cluster [DBG] pgmap v926: 268 pgs: 7 unknown, 261 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:03 vm00 bash[20770]: audit 2026-03-09T17:40:02.880899+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:03 vm00 bash[20770]: audit 2026-03-09T17:40:02.880899+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:03 vm00 bash[20770]: audit 2026-03-09T17:40:02.881289+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:03 vm00 bash[20770]: audit 2026-03-09T17:40:02.881289+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:03.845926+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:03.845926+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: cluster 2026-03-09T17:40:03.855403+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: cluster 2026-03-09T17:40:03.855403+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:03.856559+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:03.856559+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:03.856953+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:03.856953+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: cluster 2026-03-09T17:40:04.846057+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: cluster 2026-03-09T17:40:04.846057+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:04.849246+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:04 vm02 bash[23351]: audit 2026-03-09T17:40:04.849246+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:03.845926+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:03.845926+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: cluster 2026-03-09T17:40:03.855403+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: cluster 2026-03-09T17:40:03.855403+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:03.856559+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:03.856559+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:03.856953+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:03.856953+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: cluster 2026-03-09T17:40:04.846057+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: cluster 2026-03-09T17:40:04.846057+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:04.849246+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:04 vm00 bash[28333]: audit 2026-03-09T17:40:04.849246+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:03.845926+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:03.845926+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: cluster 2026-03-09T17:40:03.855403+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: cluster 2026-03-09T17:40:03.855403+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:03.856559+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:03.856559+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:03.856953+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:03.856953+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]: dispatch 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: cluster 2026-03-09T17:40:04.846057+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: cluster 2026-03-09T17:40:04.846057+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:04.849246+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:05.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:04 vm00 bash[20770]: audit 2026-03-09T17:40:04.849246+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-120"}]': finished 2026-03-09T17:40:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:05 vm02 bash[23351]: cluster 2026-03-09T17:40:04.834902+0000 mgr.y (mgr.14505) 543 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:05 vm02 bash[23351]: cluster 2026-03-09T17:40:04.834902+0000 mgr.y (mgr.14505) 543 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:05 vm02 bash[23351]: cluster 2026-03-09T17:40:04.859403+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T17:40:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:05 vm02 bash[23351]: cluster 2026-03-09T17:40:04.859403+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:05 vm00 bash[20770]: cluster 2026-03-09T17:40:04.834902+0000 mgr.y (mgr.14505) 543 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:05 vm00 bash[20770]: cluster 2026-03-09T17:40:04.834902+0000 mgr.y (mgr.14505) 543 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:05 vm00 bash[20770]: cluster 2026-03-09T17:40:04.859403+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:05 vm00 bash[20770]: cluster 2026-03-09T17:40:04.859403+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:05 vm00 bash[28333]: cluster 2026-03-09T17:40:04.834902+0000 mgr.y (mgr.14505) 543 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:05 vm00 bash[28333]: cluster 2026-03-09T17:40:04.834902+0000 mgr.y (mgr.14505) 543 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:05 vm00 bash[28333]: cluster 2026-03-09T17:40:04.859403+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T17:40:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:05 vm00 bash[28333]: cluster 2026-03-09T17:40:04.859403+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T17:40:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:40:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:40:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: cluster 2026-03-09T17:40:05.893953+0000 mon.a (mon.0) 3102 : 
cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: cluster 2026-03-09T17:40:05.893953+0000 mon.a (mon.0) 3102 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: cluster 2026-03-09T17:40:06.786116+0000 mon.a (mon.0) 3103 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: cluster 2026-03-09T17:40:06.786116+0000 mon.a (mon.0) 3103 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: cluster 2026-03-09T17:40:06.827311+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: cluster 2026-03-09T17:40:06.827311+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: audit 2026-03-09T17:40:06.829074+0000 mon.c (mon.2) 681 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: audit 2026-03-09T17:40:06.829074+0000 mon.c (mon.2) 681 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: audit 2026-03-09T17:40:06.831375+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:06 vm02 bash[23351]: audit 2026-03-09T17:40:06.831375+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: cluster 2026-03-09T17:40:05.893953+0000 mon.a (mon.0) 3102 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: cluster 2026-03-09T17:40:05.893953+0000 mon.a (mon.0) 3102 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: cluster 2026-03-09T17:40:06.786116+0000 mon.a (mon.0) 3103 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: cluster 2026-03-09T17:40:06.786116+0000 mon.a (mon.0) 3103 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: cluster 2026-03-09T17:40:06.827311+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: cluster 2026-03-09T17:40:06.827311+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: audit 2026-03-09T17:40:06.829074+0000 mon.c (mon.2) 681 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: audit 2026-03-09T17:40:06.829074+0000 mon.c (mon.2) 681 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: audit 2026-03-09T17:40:06.831375+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:06 vm00 bash[20770]: audit 2026-03-09T17:40:06.831375+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: cluster 2026-03-09T17:40:05.893953+0000 mon.a (mon.0) 3102 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: cluster 2026-03-09T17:40:05.893953+0000 mon.a (mon.0) 3102 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: cluster 2026-03-09T17:40:06.786116+0000 mon.a (mon.0) 3103 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: cluster 2026-03-09T17:40:06.786116+0000 mon.a (mon.0) 3103 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: cluster 2026-03-09T17:40:06.827311+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: cluster 2026-03-09T17:40:06.827311+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: audit 2026-03-09T17:40:06.829074+0000 mon.c (mon.2) 681 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: audit 2026-03-09T17:40:06.829074+0000 mon.c (mon.2) 681 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: audit 2026-03-09T17:40:06.831375+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:06 vm00 bash[28333]: audit 2026-03-09T17:40:06.831375+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: cluster 2026-03-09T17:40:06.835224+0000 mgr.y (mgr.14505) 544 : cluster [DBG] pgmap v932: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: cluster 2026-03-09T17:40:06.835224+0000 mgr.y (mgr.14505) 544 : cluster [DBG] pgmap v932: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: audit 2026-03-09T17:40:07.820503+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: audit 2026-03-09T17:40:07.820503+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: cluster 2026-03-09T17:40:07.827106+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: cluster 2026-03-09T17:40:07.827106+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: audit 2026-03-09T17:40:07.854043+0000 mon.c (mon.2) 682 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: audit 2026-03-09T17:40:07.854043+0000 mon.c (mon.2) 682 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: audit 2026-03-09T17:40:07.854308+0000 mon.a (mon.0) 3108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:07 vm00 bash[28333]: audit 2026-03-09T17:40:07.854308+0000 mon.a (mon.0) 3108 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: cluster 2026-03-09T17:40:06.835224+0000 mgr.y (mgr.14505) 544 : cluster [DBG] pgmap v932: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: cluster 2026-03-09T17:40:06.835224+0000 mgr.y (mgr.14505) 544 : cluster [DBG] pgmap v932: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: audit 2026-03-09T17:40:07.820503+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: audit 2026-03-09T17:40:07.820503+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: cluster 2026-03-09T17:40:07.827106+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: cluster 2026-03-09T17:40:07.827106+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: audit 2026-03-09T17:40:07.854043+0000 mon.c (mon.2) 682 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: audit 2026-03-09T17:40:07.854043+0000 mon.c (mon.2) 682 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: audit 2026-03-09T17:40:07.854308+0000 mon.a (mon.0) 3108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:07 vm00 bash[20770]: audit 2026-03-09T17:40:07.854308+0000 mon.a (mon.0) 3108 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: cluster 2026-03-09T17:40:06.835224+0000 mgr.y (mgr.14505) 544 : cluster [DBG] pgmap v932: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: cluster 2026-03-09T17:40:06.835224+0000 mgr.y (mgr.14505) 544 : cluster [DBG] pgmap v932: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: audit 2026-03-09T17:40:07.820503+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: audit 2026-03-09T17:40:07.820503+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: cluster 2026-03-09T17:40:07.827106+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: cluster 2026-03-09T17:40:07.827106+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: audit 2026-03-09T17:40:07.854043+0000 mon.c (mon.2) 682 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: audit 2026-03-09T17:40:07.854043+0000 mon.c (mon.2) 682 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: audit 2026-03-09T17:40:07.854308+0000 mon.a (mon.0) 3108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:07 vm02 bash[23351]: audit 2026-03-09T17:40:07.854308+0000 mon.a (mon.0) 3108 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: cluster 2026-03-09T17:40:08.835602+0000 mgr.y (mgr.14505) 545 : cluster [DBG] pgmap v934: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: cluster 2026-03-09T17:40:08.835602+0000 mgr.y (mgr.14505) 545 : cluster [DBG] pgmap v934: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: audit 2026-03-09T17:40:08.900448+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: audit 2026-03-09T17:40:08.900448+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: cluster 2026-03-09T17:40:08.903741+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: cluster 2026-03-09T17:40:08.903741+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: audit 2026-03-09T17:40:08.906095+0000 mon.c (mon.2) 683 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: audit 2026-03-09T17:40:08.906095+0000 mon.c (mon.2) 683 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: audit 2026-03-09T17:40:08.908012+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:09 vm00 bash[20770]: audit 2026-03-09T17:40:08.908012+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: cluster 2026-03-09T17:40:08.835602+0000 mgr.y (mgr.14505) 545 : cluster [DBG] pgmap v934: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: cluster 2026-03-09T17:40:08.835602+0000 mgr.y (mgr.14505) 545 : cluster [DBG] pgmap v934: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: audit 2026-03-09T17:40:08.900448+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: audit 2026-03-09T17:40:08.900448+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: cluster 2026-03-09T17:40:08.903741+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: cluster 2026-03-09T17:40:08.903741+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: audit 2026-03-09T17:40:08.906095+0000 mon.c (mon.2) 683 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: audit 2026-03-09T17:40:08.906095+0000 mon.c (mon.2) 683 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: audit 2026-03-09T17:40:08.908012+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:09 vm00 bash[28333]: audit 2026-03-09T17:40:08.908012+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: cluster 2026-03-09T17:40:08.835602+0000 mgr.y (mgr.14505) 545 : cluster [DBG] pgmap v934: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: cluster 2026-03-09T17:40:08.835602+0000 mgr.y (mgr.14505) 545 : cluster [DBG] pgmap v934: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: audit 2026-03-09T17:40:08.900448+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: audit 2026-03-09T17:40:08.900448+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: cluster 2026-03-09T17:40:08.903741+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: cluster 2026-03-09T17:40:08.903741+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: audit 2026-03-09T17:40:08.906095+0000 mon.c (mon.2) 683 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: audit 2026-03-09T17:40:08.906095+0000 mon.c (mon.2) 683 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: audit 2026-03-09T17:40:08.908012+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:09 vm02 bash[23351]: audit 2026-03-09T17:40:08.908012+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: audit 2026-03-09T17:40:09.903627+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: audit 2026-03-09T17:40:09.903627+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: cluster 2026-03-09T17:40:09.912787+0000 mon.a (mon.0) 3113 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: cluster 2026-03-09T17:40:09.912787+0000 mon.a (mon.0) 3113 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: audit 2026-03-09T17:40:09.917135+0000 mon.c (mon.2) 684 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: audit 2026-03-09T17:40:09.917135+0000 mon.c (mon.2) 684 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: audit 2026-03-09T17:40:09.926712+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:10 vm00 bash[20770]: audit 2026-03-09T17:40:09.926712+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: audit 2026-03-09T17:40:09.903627+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: audit 2026-03-09T17:40:09.903627+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: cluster 2026-03-09T17:40:09.912787+0000 mon.a (mon.0) 3113 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: cluster 2026-03-09T17:40:09.912787+0000 mon.a (mon.0) 3113 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: audit 2026-03-09T17:40:09.917135+0000 mon.c (mon.2) 684 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: audit 2026-03-09T17:40:09.917135+0000 mon.c (mon.2) 684 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: audit 2026-03-09T17:40:09.926712+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:10 vm00 bash[28333]: audit 2026-03-09T17:40:09.926712+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: audit 2026-03-09T17:40:09.903627+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: audit 2026-03-09T17:40:09.903627+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: cluster 2026-03-09T17:40:09.912787+0000 mon.a (mon.0) 3113 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: cluster 2026-03-09T17:40:09.912787+0000 mon.a (mon.0) 3113 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: audit 2026-03-09T17:40:09.917135+0000 mon.c (mon.2) 684 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: audit 2026-03-09T17:40:09.917135+0000 mon.c (mon.2) 684 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: audit 2026-03-09T17:40:09.926712+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:10 vm02 bash[23351]: audit 2026-03-09T17:40:09.926712+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: cluster 2026-03-09T17:40:10.835936+0000 mgr.y (mgr.14505) 546 : cluster [DBG] pgmap v937: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: cluster 2026-03-09T17:40:10.835936+0000 mgr.y (mgr.14505) 546 : cluster [DBG] pgmap v937: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: cluster 2026-03-09T17:40:10.918708+0000 mon.a (mon.0) 3115 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: cluster 2026-03-09T17:40:10.918708+0000 mon.a (mon.0) 3115 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: audit 2026-03-09T17:40:10.926451+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]': finished 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: audit 2026-03-09T17:40:10.926451+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]': finished 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: cluster 2026-03-09T17:40:10.933593+0000 mon.a (mon.0) 3117 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: cluster 2026-03-09T17:40:10.933593+0000 mon.a (mon.0) 3117 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: audit 2026-03-09T17:40:10.995847+0000 mon.c (mon.2) 685 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: audit 2026-03-09T17:40:10.995847+0000 mon.c (mon.2) 685 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: audit 2026-03-09T17:40:10.996228+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:11 vm00 bash[28333]: audit 2026-03-09T17:40:10.996228+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: cluster 2026-03-09T17:40:10.835936+0000 mgr.y (mgr.14505) 546 : cluster [DBG] pgmap v937: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: cluster 2026-03-09T17:40:10.835936+0000 mgr.y (mgr.14505) 546 : cluster [DBG] pgmap v937: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: cluster 2026-03-09T17:40:10.918708+0000 mon.a (mon.0) 3115 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: cluster 2026-03-09T17:40:10.918708+0000 mon.a (mon.0) 3115 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: audit 2026-03-09T17:40:10.926451+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]': finished 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: audit 2026-03-09T17:40:10.926451+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]': finished 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: cluster 2026-03-09T17:40:10.933593+0000 mon.a (mon.0) 3117 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: cluster 2026-03-09T17:40:10.933593+0000 mon.a (mon.0) 3117 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: audit 2026-03-09T17:40:10.995847+0000 mon.c (mon.2) 685 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: audit 2026-03-09T17:40:10.995847+0000 mon.c (mon.2) 685 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: audit 2026-03-09T17:40:10.996228+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:11 vm00 bash[20770]: audit 2026-03-09T17:40:10.996228+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: cluster 2026-03-09T17:40:10.835936+0000 mgr.y (mgr.14505) 546 : cluster [DBG] pgmap v937: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: cluster 2026-03-09T17:40:10.835936+0000 mgr.y (mgr.14505) 546 : cluster [DBG] pgmap v937: 268 pgs: 29 creating+peering, 239 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: cluster 2026-03-09T17:40:10.918708+0000 mon.a (mon.0) 3115 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: cluster 2026-03-09T17:40:10.918708+0000 mon.a (mon.0) 3115 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: audit 2026-03-09T17:40:10.926451+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]': finished 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: audit 2026-03-09T17:40:10.926451+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-122", "mode": "writeback"}]': finished 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: cluster 2026-03-09T17:40:10.933593+0000 mon.a (mon.0) 3117 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: cluster 2026-03-09T17:40:10.933593+0000 mon.a (mon.0) 3117 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: audit 2026-03-09T17:40:10.995847+0000 mon.c (mon.2) 685 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: audit 2026-03-09T17:40:10.995847+0000 mon.c (mon.2) 685 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: audit 2026-03-09T17:40:10.996228+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:11 vm02 bash[23351]: audit 2026-03-09T17:40:10.996228+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:12.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:40:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:11.940478+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:11.940478+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: cluster 2026-03-09T17:40:11.946661+0000 mon.a (mon.0) 3120 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: cluster 2026-03-09T17:40:11.946661+0000 mon.a (mon.0) 3120 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:11.950520+0000 mon.c (mon.2) 686 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:11.950520+0000 mon.c (mon.2) 686 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:11.952147+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:11.952147+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:12.108621+0000 mgr.y (mgr.14505) 547 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:13 vm00 bash[20770]: audit 2026-03-09T17:40:12.108621+0000 mgr.y (mgr.14505) 547 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:11.940478+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:11.940478+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: cluster 2026-03-09T17:40:11.946661+0000 mon.a (mon.0) 3120 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: cluster 2026-03-09T17:40:11.946661+0000 mon.a (mon.0) 3120 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:11.950520+0000 mon.c (mon.2) 686 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:11.950520+0000 mon.c (mon.2) 686 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:11.952147+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:11.952147+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:12.108621+0000 mgr.y (mgr.14505) 547 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:13 vm00 bash[28333]: audit 2026-03-09T17:40:12.108621+0000 mgr.y (mgr.14505) 547 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:11.940478+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:11.940478+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: cluster 2026-03-09T17:40:11.946661+0000 mon.a (mon.0) 3120 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: cluster 2026-03-09T17:40:11.946661+0000 mon.a (mon.0) 3120 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:11.950520+0000 mon.c (mon.2) 686 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:11.950520+0000 mon.c (mon.2) 686 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:11.952147+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:11.952147+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]: dispatch 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:12.108621+0000 mgr.y (mgr.14505) 547 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:13 vm02 bash[23351]: audit 2026-03-09T17:40:12.108621+0000 mgr.y (mgr.14505) 547 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: cluster 2026-03-09T17:40:12.836490+0000 mgr.y (mgr.14505) 548 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: cluster 2026-03-09T17:40:12.836490+0000 mgr.y (mgr.14505) 548 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: cluster 2026-03-09T17:40:12.941522+0000 mon.a (mon.0) 3122 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: cluster 2026-03-09T17:40:12.941522+0000 mon.a (mon.0) 3122 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: audit 2026-03-09T17:40:13.004341+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: audit 2026-03-09T17:40:13.004341+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: cluster 2026-03-09T17:40:13.008064+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: cluster 2026-03-09T17:40:13.008064+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: audit 2026-03-09T17:40:13.255824+0000 mon.c (mon.2) 687 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:14 vm00 bash[28333]: audit 2026-03-09T17:40:13.255824+0000 mon.c (mon.2) 687 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: cluster 2026-03-09T17:40:12.836490+0000 mgr.y (mgr.14505) 548 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: cluster 2026-03-09T17:40:12.836490+0000 mgr.y (mgr.14505) 548 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: cluster 2026-03-09T17:40:12.941522+0000 mon.a (mon.0) 3122 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: cluster 2026-03-09T17:40:12.941522+0000 mon.a (mon.0) 3122 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: audit 2026-03-09T17:40:13.004341+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: audit 2026-03-09T17:40:13.004341+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: cluster 2026-03-09T17:40:13.008064+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T17:40:14.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: cluster 2026-03-09T17:40:13.008064+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T17:40:14.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: audit 2026-03-09T17:40:13.255824+0000 mon.c (mon.2) 687 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:14.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:14 vm00 bash[20770]: audit 2026-03-09T17:40:13.255824+0000 mon.c (mon.2) 687 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: cluster 2026-03-09T17:40:12.836490+0000 mgr.y (mgr.14505) 548 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: cluster 2026-03-09T17:40:12.836490+0000 mgr.y (mgr.14505) 548 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: cluster 2026-03-09T17:40:12.941522+0000 mon.a (mon.0) 3122 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: cluster 2026-03-09T17:40:12.941522+0000 mon.a (mon.0) 3122 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: audit 2026-03-09T17:40:13.004341+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: audit 2026-03-09T17:40:13.004341+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-122"}]': finished 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: cluster 2026-03-09T17:40:13.008064+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: cluster 2026-03-09T17:40:13.008064+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: audit 2026-03-09T17:40:13.255824+0000 mon.c (mon.2) 687 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:14 vm02 bash[23351]: audit 2026-03-09T17:40:13.255824+0000 mon.c (mon.2) 687 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:15 vm00 bash[28333]: cluster 2026-03-09T17:40:14.023027+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T17:40:15.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:15 vm00 bash[28333]: cluster 2026-03-09T17:40:14.023027+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T17:40:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:15 vm00 bash[20770]: cluster 2026-03-09T17:40:14.023027+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T17:40:15.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:15 vm00 bash[20770]: cluster 2026-03-09T17:40:14.023027+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T17:40:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:15 vm02 bash[23351]: cluster 2026-03-09T17:40:14.023027+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T17:40:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:15 vm02 bash[23351]: cluster 2026-03-09T17:40:14.023027+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: cluster 2026-03-09T17:40:14.836912+0000 mgr.y (mgr.14505) 549 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: cluster 2026-03-09T17:40:14.836912+0000 mgr.y (mgr.14505) 549 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: cluster 2026-03-09T17:40:15.035629+0000 mon.a (mon.0) 3126 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: cluster 2026-03-09T17:40:15.035629+0000 mon.a (mon.0) 3126 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: audit 2026-03-09T17:40:15.037435+0000 mon.c (mon.2) 688 : audit 
[INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: audit 2026-03-09T17:40:15.037435+0000 mon.c (mon.2) 688 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: audit 2026-03-09T17:40:15.040628+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:16 vm02 bash[23351]: audit 2026-03-09T17:40:15.040628+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: cluster 2026-03-09T17:40:14.836912+0000 mgr.y (mgr.14505) 549 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: cluster 2026-03-09T17:40:14.836912+0000 mgr.y (mgr.14505) 549 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: cluster 2026-03-09T17:40:15.035629+0000 mon.a (mon.0) 3126 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: cluster 2026-03-09T17:40:15.035629+0000 mon.a (mon.0) 3126 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: audit 2026-03-09T17:40:15.037435+0000 mon.c (mon.2) 688 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: audit 2026-03-09T17:40:15.037435+0000 mon.c (mon.2) 688 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: audit 2026-03-09T17:40:15.040628+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:16 vm00 bash[20770]: audit 2026-03-09T17:40:15.040628+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: cluster 2026-03-09T17:40:14.836912+0000 mgr.y (mgr.14505) 549 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: cluster 2026-03-09T17:40:14.836912+0000 mgr.y (mgr.14505) 549 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: cluster 2026-03-09T17:40:15.035629+0000 mon.a (mon.0) 3126 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: cluster 2026-03-09T17:40:15.035629+0000 mon.a (mon.0) 3126 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T17:40:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: audit 2026-03-09T17:40:15.037435+0000 mon.c (mon.2) 688 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: audit 2026-03-09T17:40:15.037435+0000 mon.c (mon.2) 688 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: audit 2026-03-09T17:40:15.040628+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:16 vm00 bash[28333]: audit 2026-03-09T17:40:15.040628+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:16.539 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:40:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:40:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:40:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:17 vm02 bash[23351]: audit 2026-03-09T17:40:16.026136+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:17 vm02 bash[23351]: audit 2026-03-09T17:40:16.026136+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:17 vm02 bash[23351]: cluster 2026-03-09T17:40:16.031093+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T17:40:17.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:17 vm02 bash[23351]: cluster 2026-03-09T17:40:16.031093+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:17 vm00 bash[28333]: audit 2026-03-09T17:40:16.026136+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:17 vm00 bash[28333]: audit 2026-03-09T17:40:16.026136+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:17 vm00 bash[28333]: cluster 2026-03-09T17:40:16.031093+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:17 vm00 bash[28333]: cluster 2026-03-09T17:40:16.031093+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:17 vm00 bash[20770]: audit 2026-03-09T17:40:16.026136+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:17 vm00 bash[20770]: audit 2026-03-09T17:40:16.026136+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:17 vm00 bash[20770]: cluster 2026-03-09T17:40:16.031093+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T17:40:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:17 vm00 bash[20770]: cluster 2026-03-09T17:40:16.031093+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: cluster 2026-03-09T17:40:16.837298+0000 mgr.y (mgr.14505) 550 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: cluster 2026-03-09T17:40:16.837298+0000 mgr.y (mgr.14505) 550 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: cluster 2026-03-09T17:40:17.071995+0000 mon.a (mon.0) 3130 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: cluster 2026-03-09T17:40:17.071995+0000 mon.a (mon.0) 3130 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: audit 2026-03-09T17:40:17.102793+0000 mon.c (mon.2) 689 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: audit 2026-03-09T17:40:17.102793+0000 mon.c (mon.2) 689 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: audit 2026-03-09T17:40:17.103308+0000 mon.a (mon.0) 3131 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:18 vm02 bash[23351]: audit 2026-03-09T17:40:17.103308+0000 mon.a (mon.0) 3131 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: cluster 2026-03-09T17:40:16.837298+0000 mgr.y (mgr.14505) 550 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: cluster 2026-03-09T17:40:16.837298+0000 mgr.y (mgr.14505) 550 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: cluster 2026-03-09T17:40:17.071995+0000 mon.a (mon.0) 3130 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: cluster 2026-03-09T17:40:17.071995+0000 mon.a (mon.0) 3130 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: audit 2026-03-09T17:40:17.102793+0000 mon.c (mon.2) 689 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: audit 2026-03-09T17:40:17.102793+0000 mon.c (mon.2) 689 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: audit 2026-03-09T17:40:17.103308+0000 mon.a (mon.0) 3131 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:18 vm00 bash[28333]: audit 2026-03-09T17:40:17.103308+0000 mon.a (mon.0) 3131 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: cluster 2026-03-09T17:40:16.837298+0000 mgr.y (mgr.14505) 550 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: cluster 2026-03-09T17:40:16.837298+0000 mgr.y (mgr.14505) 550 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: cluster 2026-03-09T17:40:17.071995+0000 mon.a (mon.0) 3130 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: cluster 2026-03-09T17:40:17.071995+0000 mon.a (mon.0) 3130 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: audit 2026-03-09T17:40:17.102793+0000 mon.c (mon.2) 689 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: audit 2026-03-09T17:40:17.102793+0000 mon.c (mon.2) 689 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: audit 2026-03-09T17:40:17.103308+0000 mon.a (mon.0) 3131 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:18 vm00 bash[20770]: audit 2026-03-09T17:40:17.103308+0000 mon.a (mon.0) 3131 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: audit 2026-03-09T17:40:18.075870+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: audit 2026-03-09T17:40:18.075870+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: cluster 2026-03-09T17:40:18.081580+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: cluster 2026-03-09T17:40:18.081580+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: audit 2026-03-09T17:40:18.085910+0000 mon.c (mon.2) 690 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: audit 2026-03-09T17:40:18.085910+0000 mon.c (mon.2) 690 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: audit 2026-03-09T17:40:18.090474+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:19 vm02 bash[23351]: audit 2026-03-09T17:40:18.090474+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: audit 2026-03-09T17:40:18.075870+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: audit 2026-03-09T17:40:18.075870+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: cluster 2026-03-09T17:40:18.081580+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: cluster 2026-03-09T17:40:18.081580+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: audit 2026-03-09T17:40:18.085910+0000 mon.c (mon.2) 690 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: audit 2026-03-09T17:40:18.085910+0000 mon.c (mon.2) 690 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: audit 2026-03-09T17:40:18.090474+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:19 vm00 bash[20770]: audit 2026-03-09T17:40:18.090474+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: audit 2026-03-09T17:40:18.075870+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: audit 2026-03-09T17:40:18.075870+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: cluster 2026-03-09T17:40:18.081580+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T17:40:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: cluster 2026-03-09T17:40:18.081580+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T17:40:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: audit 2026-03-09T17:40:18.085910+0000 mon.c (mon.2) 690 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: audit 2026-03-09T17:40:18.085910+0000 mon.c (mon.2) 690 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: audit 2026-03-09T17:40:18.090474+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:19.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:19 vm00 bash[28333]: audit 2026-03-09T17:40:18.090474+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: cluster 2026-03-09T17:40:18.837663+0000 mgr.y (mgr.14505) 551 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: cluster 2026-03-09T17:40:18.837663+0000 mgr.y (mgr.14505) 551 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: audit 2026-03-09T17:40:19.095125+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: audit 2026-03-09T17:40:19.095125+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: audit 2026-03-09T17:40:19.108779+0000 mon.c (mon.2) 691 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: audit 2026-03-09T17:40:19.108779+0000 mon.c (mon.2) 691 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: cluster 2026-03-09T17:40:19.112824+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: cluster 2026-03-09T17:40:19.112824+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: audit 2026-03-09T17:40:19.115039+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:20 vm02 bash[23351]: audit 2026-03-09T17:40:19.115039+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: cluster 2026-03-09T17:40:18.837663+0000 mgr.y (mgr.14505) 551 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: cluster 2026-03-09T17:40:18.837663+0000 mgr.y (mgr.14505) 551 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: audit 2026-03-09T17:40:19.095125+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: audit 2026-03-09T17:40:19.095125+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: audit 2026-03-09T17:40:19.108779+0000 mon.c (mon.2) 691 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: audit 2026-03-09T17:40:19.108779+0000 mon.c (mon.2) 691 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: cluster 2026-03-09T17:40:19.112824+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: cluster 2026-03-09T17:40:19.112824+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: audit 2026-03-09T17:40:19.115039+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:20 vm00 bash[28333]: audit 2026-03-09T17:40:19.115039+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: cluster 2026-03-09T17:40:18.837663+0000 mgr.y (mgr.14505) 551 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: cluster 2026-03-09T17:40:18.837663+0000 mgr.y (mgr.14505) 551 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: audit 2026-03-09T17:40:19.095125+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: audit 2026-03-09T17:40:19.095125+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: audit 2026-03-09T17:40:19.108779+0000 mon.c (mon.2) 691 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: audit 2026-03-09T17:40:19.108779+0000 mon.c (mon.2) 691 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: cluster 2026-03-09T17:40:19.112824+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: cluster 2026-03-09T17:40:19.112824+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T17:40:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: audit 2026-03-09T17:40:19.115039+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:20.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:20 vm00 bash[20770]: audit 2026-03-09T17:40:19.115039+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]: dispatch 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:21 vm00 bash[28333]: cluster 2026-03-09T17:40:20.095978+0000 mon.a (mon.0) 3138 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:21 vm00 bash[28333]: cluster 2026-03-09T17:40:20.095978+0000 mon.a (mon.0) 3138 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:21 vm00 bash[28333]: audit 2026-03-09T17:40:20.112398+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]': finished 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:21 vm00 bash[28333]: audit 2026-03-09T17:40:20.112398+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]': finished 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:21 vm00 bash[28333]: cluster 2026-03-09T17:40:20.120886+0000 mon.a (mon.0) 3140 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:21 vm00 bash[28333]: cluster 2026-03-09T17:40:20.120886+0000 mon.a (mon.0) 3140 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:21 vm00 bash[20770]: cluster 2026-03-09T17:40:20.095978+0000 mon.a (mon.0) 3138 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:21 vm00 bash[20770]: cluster 2026-03-09T17:40:20.095978+0000 mon.a (mon.0) 3138 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:21 vm00 bash[20770]: audit 2026-03-09T17:40:20.112398+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]': finished 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:21 vm00 bash[20770]: audit 2026-03-09T17:40:20.112398+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]': finished 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:21 vm00 bash[20770]: cluster 2026-03-09T17:40:20.120886+0000 mon.a (mon.0) 3140 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T17:40:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:21 vm00 bash[20770]: cluster 2026-03-09T17:40:20.120886+0000 mon.a (mon.0) 3140 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T17:40:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:21 vm02 bash[23351]: cluster 2026-03-09T17:40:20.095978+0000 mon.a (mon.0) 3138 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:21 vm02 bash[23351]: cluster 2026-03-09T17:40:20.095978+0000 mon.a (mon.0) 3138 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:21 vm02 bash[23351]: audit 2026-03-09T17:40:20.112398+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]': finished 2026-03-09T17:40:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:21 vm02 bash[23351]: audit 2026-03-09T17:40:20.112398+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-124", "mode": "writeback"}]': finished 2026-03-09T17:40:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:21 vm02 bash[23351]: cluster 2026-03-09T17:40:20.120886+0000 mon.a (mon.0) 3140 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T17:40:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:21 vm02 bash[23351]: cluster 2026-03-09T17:40:20.120886+0000 mon.a (mon.0) 3140 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: cluster 2026-03-09T17:40:20.838014+0000 mgr.y (mgr.14505) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: cluster 2026-03-09T17:40:20.838014+0000 mgr.y (mgr.14505) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: cluster 2026-03-09T17:40:21.172989+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: cluster 2026-03-09T17:40:21.172989+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: audit 2026-03-09T17:40:21.202284+0000 mon.c (mon.2) 692 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: audit 2026-03-09T17:40:21.202284+0000 mon.c (mon.2) 692 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: audit 2026-03-09T17:40:21.202741+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:22 vm02 bash[23351]: audit 2026-03-09T17:40:21.202741+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:40:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: cluster 2026-03-09T17:40:20.838014+0000 mgr.y (mgr.14505) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: cluster 2026-03-09T17:40:20.838014+0000 mgr.y (mgr.14505) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: cluster 2026-03-09T17:40:21.172989+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: cluster 2026-03-09T17:40:21.172989+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: audit 2026-03-09T17:40:21.202284+0000 mon.c (mon.2) 692 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: audit 2026-03-09T17:40:21.202284+0000 mon.c (mon.2) 692 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: audit 2026-03-09T17:40:21.202741+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:22 vm00 bash[20770]: audit 2026-03-09T17:40:21.202741+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: cluster 2026-03-09T17:40:20.838014+0000 mgr.y (mgr.14505) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: cluster 2026-03-09T17:40:20.838014+0000 mgr.y (mgr.14505) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: cluster 2026-03-09T17:40:21.172989+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: cluster 2026-03-09T17:40:21.172989+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: audit 2026-03-09T17:40:21.202284+0000 mon.c (mon.2) 692 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: audit 2026-03-09T17:40:21.202284+0000 mon.c (mon.2) 692 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: audit 2026-03-09T17:40:21.202741+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:22 vm00 bash[28333]: audit 2026-03-09T17:40:21.202741+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.119490+0000 mgr.y (mgr.14505) 553 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.119490+0000 mgr.y (mgr.14505) 553 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.164931+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.164931+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.167406+0000 mon.c (mon.2) 693 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.167406+0000 mon.c (mon.2) 693 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: cluster 2026-03-09T17:40:22.170066+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: cluster 2026-03-09T17:40:22.170066+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.176507+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:23 vm00 bash[28333]: audit 2026-03-09T17:40:22.176507+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.119490+0000 mgr.y (mgr.14505) 553 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.119490+0000 mgr.y (mgr.14505) 553 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.164931+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.164931+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.167406+0000 mon.c (mon.2) 693 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.167406+0000 mon.c (mon.2) 693 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: cluster 2026-03-09T17:40:22.170066+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: cluster 2026-03-09T17:40:22.170066+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.176507+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:23 vm00 bash[20770]: audit 2026-03-09T17:40:22.176507+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.119490+0000 mgr.y (mgr.14505) 553 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.119490+0000 mgr.y (mgr.14505) 553 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.164931+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.164931+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.167406+0000 mon.c (mon.2) 693 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.167406+0000 mon.c (mon.2) 693 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: cluster 2026-03-09T17:40:22.170066+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: cluster 2026-03-09T17:40:22.170066+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.176507+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:23 vm02 bash[23351]: audit 2026-03-09T17:40:22.176507+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]: dispatch 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: cluster 2026-03-09T17:40:22.838597+0000 mgr.y (mgr.14505) 554 : cluster [DBG] pgmap v955: 268 pgs: 4 active+clean+snaptrim, 264 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: cluster 2026-03-09T17:40:22.838597+0000 mgr.y (mgr.14505) 554 : cluster [DBG] pgmap v955: 268 pgs: 4 active+clean+snaptrim, 264 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: cluster 2026-03-09T17:40:23.165237+0000 mon.a (mon.0) 3146 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: cluster 2026-03-09T17:40:23.165237+0000 mon.a (mon.0) 3146 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: audit 2026-03-09T17:40:23.167627+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: audit 2026-03-09T17:40:23.167627+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: cluster 2026-03-09T17:40:23.172490+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:24 vm00 bash[28333]: cluster 2026-03-09T17:40:23.172490+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: cluster 2026-03-09T17:40:22.838597+0000 mgr.y (mgr.14505) 554 : cluster [DBG] pgmap v955: 268 pgs: 4 active+clean+snaptrim, 264 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: cluster 2026-03-09T17:40:22.838597+0000 mgr.y (mgr.14505) 554 : cluster [DBG] pgmap v955: 268 pgs: 4 active+clean+snaptrim, 264 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: cluster 2026-03-09T17:40:23.165237+0000 mon.a (mon.0) 3146 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: cluster 2026-03-09T17:40:23.165237+0000 mon.a (mon.0) 3146 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: audit 2026-03-09T17:40:23.167627+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: audit 2026-03-09T17:40:23.167627+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: cluster 2026-03-09T17:40:23.172490+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T17:40:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:24 vm00 bash[20770]: cluster 2026-03-09T17:40:23.172490+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: cluster 2026-03-09T17:40:22.838597+0000 mgr.y (mgr.14505) 554 : cluster [DBG] pgmap v955: 268 pgs: 4 active+clean+snaptrim, 264 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: cluster 2026-03-09T17:40:22.838597+0000 mgr.y (mgr.14505) 554 : cluster [DBG] pgmap v955: 268 pgs: 4 active+clean+snaptrim, 264 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: cluster 2026-03-09T17:40:23.165237+0000 mon.a (mon.0) 3146 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: cluster 2026-03-09T17:40:23.165237+0000 mon.a (mon.0) 3146 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: audit 2026-03-09T17:40:23.167627+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: audit 2026-03-09T17:40:23.167627+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-124"}]': finished 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: cluster 2026-03-09T17:40:23.172490+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T17:40:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:24 vm02 bash[23351]: cluster 2026-03-09T17:40:23.172490+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T17:40:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:25 vm02 bash[23351]: cluster 2026-03-09T17:40:24.201257+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T17:40:25.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:25 vm02 bash[23351]: cluster 2026-03-09T17:40:24.201257+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T17:40:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:25 vm00 bash[28333]: cluster 2026-03-09T17:40:24.201257+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T17:40:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:25 vm00 bash[28333]: cluster 2026-03-09T17:40:24.201257+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T17:40:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:25 vm00 bash[20770]: cluster 2026-03-09T17:40:24.201257+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T17:40:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:25 vm00 bash[20770]: cluster 2026-03-09T17:40:24.201257+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: cluster 2026-03-09T17:40:24.838899+0000 mgr.y (mgr.14505) 555 : cluster [DBG] pgmap v958: 236 pgs: 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: cluster 2026-03-09T17:40:24.838899+0000 mgr.y (mgr.14505) 555 : cluster [DBG] pgmap v958: 236 pgs: 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: cluster 2026-03-09T17:40:25.349688+0000 mon.a (mon.0) 3150 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: cluster 2026-03-09T17:40:25.349688+0000 mon.a (mon.0) 3150 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: audit 2026-03-09T17:40:25.351718+0000 mon.c (mon.2) 694 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: audit 2026-03-09T17:40:25.351718+0000 mon.c (mon.2) 694 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: audit 2026-03-09T17:40:25.356842+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:26 vm02 bash[23351]: audit 2026-03-09T17:40:25.356842+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: cluster 2026-03-09T17:40:24.838899+0000 mgr.y (mgr.14505) 555 : cluster [DBG] pgmap v958: 236 pgs: 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: cluster 2026-03-09T17:40:24.838899+0000 mgr.y (mgr.14505) 555 : cluster [DBG] pgmap v958: 236 pgs: 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: cluster 2026-03-09T17:40:25.349688+0000 mon.a (mon.0) 3150 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: cluster 2026-03-09T17:40:25.349688+0000 mon.a (mon.0) 3150 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: audit 2026-03-09T17:40:25.351718+0000 mon.c (mon.2) 694 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: audit 2026-03-09T17:40:25.351718+0000 mon.c (mon.2) 694 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: audit 2026-03-09T17:40:25.356842+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:26 vm00 bash[28333]: audit 2026-03-09T17:40:25.356842+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: cluster 2026-03-09T17:40:24.838899+0000 mgr.y (mgr.14505) 555 : cluster [DBG] pgmap v958: 236 pgs: 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: cluster 2026-03-09T17:40:24.838899+0000 mgr.y (mgr.14505) 555 : cluster [DBG] pgmap v958: 236 pgs: 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: cluster 2026-03-09T17:40:25.349688+0000 mon.a (mon.0) 3150 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: cluster 2026-03-09T17:40:25.349688+0000 mon.a (mon.0) 3150 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: audit 2026-03-09T17:40:25.351718+0000 mon.c (mon.2) 694 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: audit 2026-03-09T17:40:25.351718+0000 mon.c (mon.2) 694 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: audit 2026-03-09T17:40:25.356842+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:26 vm00 bash[20770]: audit 2026-03-09T17:40:25.356842+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:26.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:40:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:40:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:26.349093+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:26.349093+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: cluster 2026-03-09T17:40:26.353700+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: cluster 2026-03-09T17:40:26.353700+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:26.401467+0000 mon.c (mon.2) 695 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:26.401467+0000 mon.c (mon.2) 695 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:26.404288+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:26.404288+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:27.351795+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:27.351795+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: cluster 2026-03-09T17:40:27.356066+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: cluster 2026-03-09T17:40:27.356066+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:27.365554+0000 mon.c (mon.2) 696 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:27.365554+0000 mon.c (mon.2) 696 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:27.366319+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:27 vm00 bash[20770]: audit 2026-03-09T17:40:27.366319+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:26.349093+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:26.349093+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: cluster 2026-03-09T17:40:26.353700+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: cluster 2026-03-09T17:40:26.353700+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:26.401467+0000 mon.c (mon.2) 695 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:26.401467+0000 mon.c (mon.2) 695 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:26.404288+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:26.404288+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:27.351795+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:27.351795+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: cluster 2026-03-09T17:40:27.356066+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: cluster 2026-03-09T17:40:27.356066+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:27.365554+0000 mon.c (mon.2) 696 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:27.365554+0000 mon.c (mon.2) 696 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:27.366319+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:27 vm00 bash[28333]: audit 2026-03-09T17:40:27.366319+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:26.349093+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:26.349093+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: cluster 2026-03-09T17:40:26.353700+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: cluster 2026-03-09T17:40:26.353700+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:26.401467+0000 mon.c (mon.2) 695 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:26.401467+0000 mon.c (mon.2) 695 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:26.404288+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:26.404288+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:27.351795+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:27.351795+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: cluster 2026-03-09T17:40:27.356066+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: cluster 2026-03-09T17:40:27.356066+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:27.365554+0000 mon.c (mon.2) 696 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:27.365554+0000 mon.c (mon.2) 696 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:27.366319+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:27 vm02 bash[23351]: audit 2026-03-09T17:40:27.366319+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: cluster 2026-03-09T17:40:26.839304+0000 mgr.y (mgr.14505) 556 : cluster [DBG] pgmap v961: 268 pgs: 32 unknown, 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: cluster 2026-03-09T17:40:26.839304+0000 mgr.y (mgr.14505) 556 : cluster [DBG] pgmap v961: 268 pgs: 32 unknown, 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.262155+0000 mon.c (mon.2) 697 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.262155+0000 mon.c (mon.2) 697 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.354715+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.354715+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.358624+0000 mon.c (mon.2) 698 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.358624+0000 mon.c (mon.2) 698 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: cluster 2026-03-09T17:40:28.359222+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: cluster 2026-03-09T17:40:28.359222+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.366148+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:28 vm00 bash[28333]: audit 2026-03-09T17:40:28.366148+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: cluster 2026-03-09T17:40:26.839304+0000 mgr.y (mgr.14505) 556 : cluster [DBG] pgmap v961: 268 pgs: 32 unknown, 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: cluster 2026-03-09T17:40:26.839304+0000 mgr.y (mgr.14505) 556 : cluster [DBG] pgmap v961: 268 pgs: 32 unknown, 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.262155+0000 mon.c (mon.2) 697 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.262155+0000 mon.c (mon.2) 697 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.354715+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.354715+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.358624+0000 mon.c (mon.2) 698 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.358624+0000 mon.c (mon.2) 698 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: cluster 2026-03-09T17:40:28.359222+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: cluster 2026-03-09T17:40:28.359222+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.366148+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:28 vm00 bash[20770]: audit 2026-03-09T17:40:28.366148+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: cluster 2026-03-09T17:40:26.839304+0000 mgr.y (mgr.14505) 556 : cluster [DBG] pgmap v961: 268 pgs: 32 unknown, 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: cluster 2026-03-09T17:40:26.839304+0000 mgr.y (mgr.14505) 556 : cluster [DBG] pgmap v961: 268 pgs: 32 unknown, 4 active+clean+snaptrim, 232 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.262155+0000 mon.c (mon.2) 697 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.262155+0000 mon.c (mon.2) 697 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.354715+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.354715+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.358624+0000 mon.c (mon.2) 698 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.358624+0000 mon.c (mon.2) 698 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: cluster 2026-03-09T17:40:28.359222+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: cluster 2026-03-09T17:40:28.359222+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.366148+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:28 vm02 bash[23351]: audit 2026-03-09T17:40:28.366148+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]: dispatch 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:29 vm00 bash[28333]: cluster 2026-03-09T17:40:29.354717+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:29 vm00 bash[28333]: cluster 2026-03-09T17:40:29.354717+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:29 vm00 bash[28333]: audit 2026-03-09T17:40:29.357218+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]': finished 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:29 vm00 bash[28333]: audit 2026-03-09T17:40:29.357218+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]': finished 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:29 vm00 bash[28333]: cluster 2026-03-09T17:40:29.360219+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:29 vm00 bash[28333]: cluster 2026-03-09T17:40:29.360219+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:29 vm00 bash[20770]: cluster 2026-03-09T17:40:29.354717+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:29 vm00 bash[20770]: cluster 2026-03-09T17:40:29.354717+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:29 vm00 bash[20770]: audit 2026-03-09T17:40:29.357218+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]': finished 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:29 vm00 bash[20770]: audit 2026-03-09T17:40:29.357218+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]': finished 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:29 vm00 bash[20770]: cluster 2026-03-09T17:40:29.360219+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T17:40:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:29 vm00 bash[20770]: cluster 2026-03-09T17:40:29.360219+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T17:40:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:29 vm02 bash[23351]: cluster 2026-03-09T17:40:29.354717+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:29 vm02 bash[23351]: cluster 2026-03-09T17:40:29.354717+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:29 vm02 bash[23351]: audit 2026-03-09T17:40:29.357218+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]': finished 2026-03-09T17:40:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:29 vm02 bash[23351]: audit 2026-03-09T17:40:29.357218+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-126", "mode": "writeback"}]': finished 2026-03-09T17:40:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:29 vm02 bash[23351]: cluster 2026-03-09T17:40:29.360219+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T17:40:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:29 vm02 bash[23351]: cluster 2026-03-09T17:40:29.360219+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:30 vm00 bash[28333]: cluster 2026-03-09T17:40:28.839738+0000 mgr.y (mgr.14505) 557 : cluster [DBG] pgmap v964: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:30 vm00 bash[28333]: cluster 2026-03-09T17:40:28.839738+0000 mgr.y (mgr.14505) 557 : cluster [DBG] pgmap v964: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:30 vm00 bash[28333]: audit 2026-03-09T17:40:29.437943+0000 mon.c (mon.2) 699 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:30 vm00 bash[28333]: audit 2026-03-09T17:40:29.437943+0000 mon.c (mon.2) 699 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:30 vm00 bash[28333]: audit 2026-03-09T17:40:29.438280+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:30 vm00 bash[28333]: audit 2026-03-09T17:40:29.438280+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:30 vm00 bash[20770]: cluster 2026-03-09T17:40:28.839738+0000 mgr.y (mgr.14505) 557 : cluster [DBG] pgmap v964: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:30 vm00 bash[20770]: cluster 2026-03-09T17:40:28.839738+0000 mgr.y (mgr.14505) 557 : cluster [DBG] pgmap v964: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:30 vm00 bash[20770]: audit 2026-03-09T17:40:29.437943+0000 mon.c (mon.2) 699 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:30 vm00 bash[20770]: audit 2026-03-09T17:40:29.437943+0000 mon.c (mon.2) 699 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:30 vm00 bash[20770]: audit 2026-03-09T17:40:29.438280+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:30 vm00 bash[20770]: audit 2026-03-09T17:40:29.438280+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:30 vm02 bash[23351]: cluster 2026-03-09T17:40:28.839738+0000 mgr.y (mgr.14505) 557 : cluster [DBG] pgmap v964: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:30 vm02 bash[23351]: cluster 2026-03-09T17:40:28.839738+0000 mgr.y (mgr.14505) 557 : cluster [DBG] pgmap v964: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:30 vm02 bash[23351]: audit 2026-03-09T17:40:29.437943+0000 mon.c (mon.2) 699 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:30 vm02 bash[23351]: audit 2026-03-09T17:40:29.437943+0000 mon.c (mon.2) 699 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:30 vm02 bash[23351]: audit 2026-03-09T17:40:29.438280+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:30 vm02 bash[23351]: audit 2026-03-09T17:40:29.438280+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:30.404001+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:30.404001+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:30.411246+0000 mon.c (mon.2) 700 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:30.411246+0000 mon.c (mon.2) 700 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: cluster 2026-03-09T17:40:30.414429+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: cluster 2026-03-09T17:40:30.414429+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:30.415910+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:30.415910+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: cluster 2026-03-09T17:40:31.404100+0000 mon.a (mon.0) 3168 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: cluster 2026-03-09T17:40:31.404100+0000 mon.a (mon.0) 3168 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:31.407168+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: audit 2026-03-09T17:40:31.407168+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: cluster 2026-03-09T17:40:31.417588+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:31 vm00 bash[20770]: cluster 2026-03-09T17:40:31.417588+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:30.404001+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:30.404001+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:30.411246+0000 mon.c (mon.2) 700 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:30.411246+0000 mon.c (mon.2) 700 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: cluster 2026-03-09T17:40:30.414429+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: cluster 2026-03-09T17:40:30.414429+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:30.415910+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:30.415910+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: cluster 2026-03-09T17:40:31.404100+0000 mon.a (mon.0) 3168 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: cluster 2026-03-09T17:40:31.404100+0000 mon.a (mon.0) 3168 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:31.407168+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: audit 2026-03-09T17:40:31.407168+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: cluster 2026-03-09T17:40:31.417588+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T17:40:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:31 vm00 bash[28333]: cluster 2026-03-09T17:40:31.417588+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringFlush (9315 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapHasChunk 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapHasChunk (6054 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollback 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollback (5066 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollbackRefcount 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollbackRefcount (24299 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictRollback 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictRollback (14108 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PropagateBaseTierError 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PropagateBaseTierError (12127 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HelloWriteReturn 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 00000000 79 6f 75 20 6d 69 67 68 74 20 73 65 65 20 74 68 |you might see th| 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 00000010 69 73 |is| 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 00000012 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HelloWriteReturn (12146 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier (6089 ms) 
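For reference, the monitor audit entries above trace the cache-tier lifecycle that the LibRadosTwoPoolsPP cases drive against the base pool test-rados-api-vm00-60118-111: a cache pool (first ...-126, then ...-128) is attached, put into writeback mode, set as the overlay, and detached again before the next case. Expressed as the equivalent ceph CLI invocations (the tests issue the matching mon commands through librados, so this is only an illustrative sketch; $BASE and $CACHE stand for the base and cache pool names of the current case), the sequence is roughly:

    ceph osd pool application enable $CACHE rados --yes-i-really-mean-it
    ceph osd tier add $BASE $CACHE --force-nonempty
    ceph osd tier cache-mode $CACHE writeback
    ceph osd tier set-overlay $BASE $CACHE
    # ... test I/O against $BASE is served through the cache tier ...
    ceph osd tier remove-overlay $BASE
    ceph osd tier remove $BASE $CACHE

The CACHE_POOL_NO_HIT_SET health warning that appears and then clears in the cluster log above is the expected side effect of running a writeback tier without a configured hit_set during these cases.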
2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP (559219 ms total) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.Dirty 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.Dirty (1123 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.FlushWriteRaces 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.FlushWriteRaces (11073 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.CallForcesPromote 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.CallForcesPromote (18227 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.HitSetNone 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.HitSetNone (1 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP (30424 ms total) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Overlay 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Overlay (7099 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Promote 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Promote (8490 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnap 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: waiting for scrub... 
2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: done waiting 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnap (24722 ms) 2026-03-09T17:40:31.815 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace (9308 ms) 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Whiteout 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Whiteout (8087 ms) 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Evict 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Evict (8132 ms) 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.EvictSnap 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.EvictSnap (10179 ms) 2026-03-09T17:40:31.816 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlush 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:30.404001+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:30.404001+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:30.411246+0000 mon.c (mon.2) 700 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:30.411246+0000 mon.c (mon.2) 700 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: cluster 2026-03-09T17:40:30.414429+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: cluster 2026-03-09T17:40:30.414429+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:30.415910+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:30.415910+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]: dispatch 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: cluster 2026-03-09T17:40:31.404100+0000 mon.a (mon.0) 3168 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: cluster 2026-03-09T17:40:31.404100+0000 mon.a (mon.0) 3168 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:31.407168+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: audit 2026-03-09T17:40:31.407168+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-126"}]': finished 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: cluster 2026-03-09T17:40:31.417588+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T17:40:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:31 vm02 bash[23351]: cluster 2026-03-09T17:40:31.417588+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T17:40:32.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:40:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:32 vm00 bash[28333]: cluster 2026-03-09T17:40:30.840028+0000 mgr.y (mgr.14505) 558 : cluster [DBG] pgmap v967: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:32 vm00 bash[28333]: cluster 2026-03-09T17:40:30.840028+0000 mgr.y (mgr.14505) 558 : cluster [DBG] pgmap v967: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:32 vm00 bash[28333]: cluster 2026-03-09T17:40:31.814930+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:32 vm00 bash[28333]: cluster 2026-03-09T17:40:31.814930+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:32 vm00 bash[28333]: audit 2026-03-09T17:40:32.130184+0000 mgr.y (mgr.14505) 559 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:32.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:32 vm00 bash[28333]: audit 2026-03-09T17:40:32.130184+0000 mgr.y (mgr.14505) 559 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:32 vm00 bash[20770]: cluster 2026-03-09T17:40:30.840028+0000 mgr.y (mgr.14505) 558 : cluster [DBG] pgmap v967: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:32 vm00 bash[20770]: cluster 2026-03-09T17:40:30.840028+0000 mgr.y (mgr.14505) 558 : cluster [DBG] pgmap v967: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:32 vm00 bash[20770]: cluster 2026-03-09T17:40:31.814930+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:32 vm00 bash[20770]: cluster 2026-03-09T17:40:31.814930+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:32 vm00 bash[20770]: audit 2026-03-09T17:40:32.130184+0000 mgr.y (mgr.14505) 559 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:32 vm00 bash[20770]: audit 2026-03-09T17:40:32.130184+0000 mgr.y (mgr.14505) 559 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:32 vm02 bash[23351]: cluster 2026-03-09T17:40:30.840028+0000 mgr.y (mgr.14505) 558 : cluster [DBG] pgmap v967: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:32 vm02 bash[23351]: cluster 2026-03-09T17:40:30.840028+0000 mgr.y (mgr.14505) 558 : cluster [DBG] pgmap v967: 268 pgs: 268 active+clean; 455 KiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:32 vm02 bash[23351]: cluster 2026-03-09T17:40:31.814930+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T17:40:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:32 vm02 bash[23351]: cluster 2026-03-09T17:40:31.814930+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T17:40:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:32 vm02 bash[23351]: audit 2026-03-09T17:40:32.130184+0000 mgr.y (mgr.14505) 559 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:32 vm02 bash[23351]: audit 2026-03-09T17:40:32.130184+0000 mgr.y (mgr.14505) 559 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: cluster 2026-03-09T17:40:32.806731+0000 mon.a (mon.0) 3172 : cluster [DBG] 
osdmap e625: 8 total, 8 up, 8 in 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: cluster 2026-03-09T17:40:32.806731+0000 mon.a (mon.0) 3172 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: audit 2026-03-09T17:40:32.813012+0000 mon.c (mon.2) 701 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: audit 2026-03-09T17:40:32.813012+0000 mon.c (mon.2) 701 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: audit 2026-03-09T17:40:32.821172+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: audit 2026-03-09T17:40:32.821172+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: cluster 2026-03-09T17:40:32.840569+0000 mgr.y (mgr.14505) 560 : cluster [DBG] pgmap v971: 268 pgs: 3 creating+peering, 29 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 294 B/s wr, 1 op/s 2026-03-09T17:40:34.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:33 vm02 bash[23351]: cluster 2026-03-09T17:40:32.840569+0000 mgr.y (mgr.14505) 560 : cluster [DBG] pgmap v971: 268 pgs: 3 creating+peering, 29 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 294 B/s wr, 1 op/s 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: cluster 2026-03-09T17:40:32.806731+0000 mon.a (mon.0) 3172 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: cluster 2026-03-09T17:40:32.806731+0000 mon.a (mon.0) 3172 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: audit 2026-03-09T17:40:32.813012+0000 mon.c (mon.2) 701 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: audit 2026-03-09T17:40:32.813012+0000 mon.c (mon.2) 701 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: audit 2026-03-09T17:40:32.821172+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: audit 2026-03-09T17:40:32.821172+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: cluster 2026-03-09T17:40:32.840569+0000 mgr.y (mgr.14505) 560 : cluster [DBG] pgmap v971: 268 pgs: 3 creating+peering, 29 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 294 B/s wr, 1 op/s 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:33 vm00 bash[28333]: cluster 2026-03-09T17:40:32.840569+0000 mgr.y (mgr.14505) 560 : cluster [DBG] pgmap v971: 268 pgs: 3 creating+peering, 29 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 294 B/s wr, 1 op/s 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: cluster 2026-03-09T17:40:32.806731+0000 mon.a (mon.0) 3172 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: cluster 2026-03-09T17:40:32.806731+0000 mon.a (mon.0) 3172 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: audit 2026-03-09T17:40:32.813012+0000 mon.c (mon.2) 701 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: audit 2026-03-09T17:40:32.813012+0000 mon.c (mon.2) 701 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: audit 2026-03-09T17:40:32.821172+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: audit 2026-03-09T17:40:32.821172+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: cluster 2026-03-09T17:40:32.840569+0000 mgr.y (mgr.14505) 560 : cluster [DBG] pgmap v971: 268 pgs: 3 creating+peering, 29 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 294 B/s wr, 1 op/s 2026-03-09T17:40:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:33 vm00 bash[20770]: cluster 2026-03-09T17:40:32.840569+0000 mgr.y (mgr.14505) 560 : cluster [DBG] pgmap v971: 268 pgs: 3 creating+peering, 29 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 294 B/s wr, 1 op/s 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: cluster 2026-03-09T17:40:33.800252+0000 mon.a (mon.0) 3174 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: cluster 2026-03-09T17:40:33.800252+0000 mon.a (mon.0) 3174 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: audit 2026-03-09T17:40:33.811449+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: audit 2026-03-09T17:40:33.811449+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: cluster 2026-03-09T17:40:33.814534+0000 mon.a (mon.0) 3176 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: cluster 2026-03-09T17:40:33.814534+0000 mon.a (mon.0) 3176 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: audit 2026-03-09T17:40:33.828059+0000 mon.c (mon.2) 702 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: audit 2026-03-09T17:40:33.828059+0000 mon.c (mon.2) 702 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: audit 2026-03-09T17:40:33.830194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:34 vm02 bash[23351]: audit 2026-03-09T17:40:33.830194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: cluster 2026-03-09T17:40:33.800252+0000 mon.a (mon.0) 3174 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: cluster 2026-03-09T17:40:33.800252+0000 mon.a (mon.0) 3174 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: audit 2026-03-09T17:40:33.811449+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: audit 2026-03-09T17:40:33.811449+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: cluster 2026-03-09T17:40:33.814534+0000 mon.a (mon.0) 3176 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: cluster 2026-03-09T17:40:33.814534+0000 mon.a (mon.0) 3176 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: audit 2026-03-09T17:40:33.828059+0000 mon.c (mon.2) 702 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: audit 2026-03-09T17:40:33.828059+0000 mon.c (mon.2) 702 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: audit 2026-03-09T17:40:33.830194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:34 vm00 bash[28333]: audit 2026-03-09T17:40:33.830194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: cluster 2026-03-09T17:40:33.800252+0000 mon.a (mon.0) 3174 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: cluster 2026-03-09T17:40:33.800252+0000 mon.a (mon.0) 3174 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: audit 2026-03-09T17:40:33.811449+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: audit 2026-03-09T17:40:33.811449+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: cluster 2026-03-09T17:40:33.814534+0000 mon.a (mon.0) 3176 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: cluster 2026-03-09T17:40:33.814534+0000 mon.a (mon.0) 3176 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: audit 2026-03-09T17:40:33.828059+0000 mon.c (mon.2) 702 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: audit 2026-03-09T17:40:33.828059+0000 mon.c (mon.2) 702 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: audit 2026-03-09T17:40:33.830194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:35.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:34 vm00 bash[20770]: audit 2026-03-09T17:40:33.830194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: audit 2026-03-09T17:40:34.813852+0000 mon.a (mon.0) 3178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: audit 2026-03-09T17:40:34.813852+0000 mon.a (mon.0) 3178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: cluster 2026-03-09T17:40:34.817516+0000 mon.a (mon.0) 3179 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: cluster 2026-03-09T17:40:34.817516+0000 mon.a (mon.0) 3179 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: audit 2026-03-09T17:40:34.827585+0000 mon.c (mon.2) 703 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: audit 2026-03-09T17:40:34.827585+0000 mon.c (mon.2) 703 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: audit 2026-03-09T17:40:34.831290+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: audit 2026-03-09T17:40:34.831290+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: cluster 2026-03-09T17:40:34.840923+0000 mgr.y (mgr.14505) 561 : cluster [DBG] pgmap v974: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 298 B/s wr, 1 op/s 2026-03-09T17:40:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:35 vm02 bash[23351]: cluster 2026-03-09T17:40:34.840923+0000 mgr.y (mgr.14505) 561 : cluster [DBG] pgmap v974: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 298 B/s wr, 1 op/s 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: audit 2026-03-09T17:40:34.813852+0000 mon.a (mon.0) 3178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: audit 2026-03-09T17:40:34.813852+0000 mon.a (mon.0) 3178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: cluster 2026-03-09T17:40:34.817516+0000 mon.a (mon.0) 3179 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: cluster 2026-03-09T17:40:34.817516+0000 mon.a (mon.0) 3179 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: audit 2026-03-09T17:40:34.827585+0000 mon.c (mon.2) 703 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: audit 2026-03-09T17:40:34.827585+0000 mon.c (mon.2) 703 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: audit 2026-03-09T17:40:34.831290+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: audit 2026-03-09T17:40:34.831290+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: cluster 2026-03-09T17:40:34.840923+0000 mgr.y (mgr.14505) 561 : cluster [DBG] pgmap v974: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 298 B/s wr, 1 op/s 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:35 vm00 bash[28333]: cluster 2026-03-09T17:40:34.840923+0000 mgr.y (mgr.14505) 561 : cluster [DBG] pgmap v974: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 298 B/s wr, 1 op/s 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: audit 2026-03-09T17:40:34.813852+0000 mon.a (mon.0) 3178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: audit 2026-03-09T17:40:34.813852+0000 mon.a (mon.0) 3178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: cluster 2026-03-09T17:40:34.817516+0000 mon.a (mon.0) 3179 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: cluster 2026-03-09T17:40:34.817516+0000 mon.a (mon.0) 3179 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: audit 2026-03-09T17:40:34.827585+0000 mon.c (mon.2) 703 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: audit 2026-03-09T17:40:34.827585+0000 mon.c (mon.2) 703 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: audit 2026-03-09T17:40:34.831290+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: audit 2026-03-09T17:40:34.831290+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: cluster 2026-03-09T17:40:34.840923+0000 mgr.y (mgr.14505) 561 : cluster [DBG] pgmap v974: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 298 B/s wr, 1 op/s 2026-03-09T17:40:36.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:35 vm00 bash[20770]: cluster 2026-03-09T17:40:34.840923+0000 mgr.y (mgr.14505) 561 : cluster [DBG] pgmap v974: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 298 B/s wr, 1 op/s 2026-03-09T17:40:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:40:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:40:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: audit 2026-03-09T17:40:35.824719+0000 mon.a (mon.0) 3181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: audit 2026-03-09T17:40:35.824719+0000 mon.a (mon.0) 3181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: audit 2026-03-09T17:40:35.828882+0000 mon.c (mon.2) 704 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: audit 2026-03-09T17:40:35.828882+0000 mon.c (mon.2) 704 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: cluster 2026-03-09T17:40:35.839256+0000 mon.a (mon.0) 3182 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: cluster 2026-03-09T17:40:35.839256+0000 mon.a (mon.0) 3182 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: audit 2026-03-09T17:40:35.841174+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:36 vm02 bash[23351]: audit 2026-03-09T17:40:35.841174+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: audit 2026-03-09T17:40:35.824719+0000 mon.a (mon.0) 3181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: audit 2026-03-09T17:40:35.824719+0000 mon.a (mon.0) 3181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: audit 2026-03-09T17:40:35.828882+0000 mon.c (mon.2) 704 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: audit 2026-03-09T17:40:35.828882+0000 mon.c (mon.2) 704 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: cluster 2026-03-09T17:40:35.839256+0000 mon.a (mon.0) 3182 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: cluster 2026-03-09T17:40:35.839256+0000 mon.a (mon.0) 3182 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: audit 2026-03-09T17:40:35.841174+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:36 vm00 bash[28333]: audit 2026-03-09T17:40:35.841174+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: audit 2026-03-09T17:40:35.824719+0000 mon.a (mon.0) 3181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: audit 2026-03-09T17:40:35.824719+0000 mon.a (mon.0) 3181 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: audit 2026-03-09T17:40:35.828882+0000 mon.c (mon.2) 704 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: audit 2026-03-09T17:40:35.828882+0000 mon.c (mon.2) 704 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: cluster 2026-03-09T17:40:35.839256+0000 mon.a (mon.0) 3182 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: cluster 2026-03-09T17:40:35.839256+0000 mon.a (mon.0) 3182 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: audit 2026-03-09T17:40:35.841174+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:36 vm00 bash[20770]: audit 2026-03-09T17:40:35.841174+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]: dispatch 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: cluster 2026-03-09T17:40:36.824729+0000 mon.a (mon.0) 3184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: cluster 2026-03-09T17:40:36.824729+0000 mon.a (mon.0) 3184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: audit 2026-03-09T17:40:36.828535+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]': finished 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: audit 2026-03-09T17:40:36.828535+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]': finished 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: cluster 2026-03-09T17:40:36.834633+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: cluster 2026-03-09T17:40:36.834633+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: cluster 2026-03-09T17:40:36.841176+0000 mgr.y (mgr.14505) 562 : cluster [DBG] pgmap v977: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:37 vm02 bash[23351]: cluster 2026-03-09T17:40:36.841176+0000 mgr.y (mgr.14505) 562 : cluster [DBG] pgmap v977: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: cluster 2026-03-09T17:40:36.824729+0000 mon.a (mon.0) 3184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: cluster 2026-03-09T17:40:36.824729+0000 mon.a (mon.0) 3184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: audit 2026-03-09T17:40:36.828535+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]': finished 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: audit 2026-03-09T17:40:36.828535+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]': finished 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: cluster 2026-03-09T17:40:36.834633+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: cluster 2026-03-09T17:40:36.834633+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: cluster 2026-03-09T17:40:36.841176+0000 mgr.y (mgr.14505) 562 : cluster [DBG] pgmap v977: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:37 vm00 bash[28333]: cluster 2026-03-09T17:40:36.841176+0000 mgr.y (mgr.14505) 562 : cluster [DBG] pgmap v977: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: cluster 2026-03-09T17:40:36.824729+0000 mon.a (mon.0) 3184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: cluster 2026-03-09T17:40:36.824729+0000 mon.a (mon.0) 3184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: audit 2026-03-09T17:40:36.828535+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]': finished 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: audit 2026-03-09T17:40:36.828535+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-128", "mode": "writeback"}]': finished 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: cluster 2026-03-09T17:40:36.834633+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: cluster 2026-03-09T17:40:36.834633+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: cluster 2026-03-09T17:40:36.841176+0000 mgr.y (mgr.14505) 562 : cluster [DBG] pgmap v977: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:37 vm00 bash[20770]: cluster 2026-03-09T17:40:36.841176+0000 mgr.y (mgr.14505) 562 : cluster [DBG] pgmap v977: 268 pgs: 8 creating+peering, 24 unknown, 236 active+clean; 455 KiB data, 968 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:39 vm02 bash[23351]: cluster 2026-03-09T17:40:38.841842+0000 mgr.y (mgr.14505) 563 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 407 B/s wr, 1 op/s 2026-03-09T17:40:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:39 vm02 bash[23351]: cluster 2026-03-09T17:40:38.841842+0000 mgr.y (mgr.14505) 563 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 407 B/s wr, 1 op/s 2026-03-09T17:40:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:39 vm00 bash[28333]: cluster 2026-03-09T17:40:38.841842+0000 mgr.y (mgr.14505) 563 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 407 B/s wr, 1 op/s 2026-03-09T17:40:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:39 vm00 bash[28333]: cluster 2026-03-09T17:40:38.841842+0000 mgr.y (mgr.14505) 563 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 407 B/s wr, 1 op/s 2026-03-09T17:40:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:39 vm00 bash[20770]: cluster 2026-03-09T17:40:38.841842+0000 mgr.y (mgr.14505) 563 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 407 B/s wr, 1 op/s 2026-03-09T17:40:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:39 vm00 bash[20770]: cluster 2026-03-09T17:40:38.841842+0000 mgr.y (mgr.14505) 563 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 407 B/s wr, 1 op/s 2026-03-09T17:40:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:41 vm02 bash[23351]: cluster 2026-03-09T17:40:40.842160+0000 mgr.y (mgr.14505) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 339 B/s wr, 1 op/s 2026-03-09T17:40:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:41 vm02 bash[23351]: cluster 2026-03-09T17:40:40.842160+0000 mgr.y (mgr.14505) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 339 B/s wr, 1 op/s 
2026-03-09T17:40:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:41 vm02 bash[23351]: cluster 2026-03-09T17:40:41.795042+0000 mon.a (mon.0) 3187 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:42.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:41 vm02 bash[23351]: cluster 2026-03-09T17:40:41.795042+0000 mon.a (mon.0) 3187 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:41 vm00 bash[28333]: cluster 2026-03-09T17:40:40.842160+0000 mgr.y (mgr.14505) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 339 B/s wr, 1 op/s 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:41 vm00 bash[28333]: cluster 2026-03-09T17:40:40.842160+0000 mgr.y (mgr.14505) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 339 B/s wr, 1 op/s 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:41 vm00 bash[28333]: cluster 2026-03-09T17:40:41.795042+0000 mon.a (mon.0) 3187 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:41 vm00 bash[28333]: cluster 2026-03-09T17:40:41.795042+0000 mon.a (mon.0) 3187 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:41 vm00 bash[20770]: cluster 2026-03-09T17:40:40.842160+0000 mgr.y (mgr.14505) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 339 B/s wr, 1 op/s 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:41 vm00 bash[20770]: cluster 2026-03-09T17:40:40.842160+0000 mgr.y (mgr.14505) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1019 B/s rd, 339 B/s wr, 1 op/s 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:41 vm00 bash[20770]: cluster 2026-03-09T17:40:41.795042+0000 mon.a (mon.0) 3187 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:41 vm00 bash[20770]: cluster 2026-03-09T17:40:41.795042+0000 mon.a (mon.0) 3187 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:42.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:40:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:40:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:42 vm02 bash[23351]: audit 2026-03-09T17:40:41.899043+0000 mon.c (mon.2) 705 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:42 vm02 bash[23351]: audit 2026-03-09T17:40:41.899043+0000 mon.c (mon.2) 705 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:42 vm02 bash[23351]: audit 2026-03-09T17:40:41.899621+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:42 vm02 bash[23351]: audit 2026-03-09T17:40:41.899621+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:42 vm02 bash[23351]: audit 2026-03-09T17:40:42.138102+0000 mgr.y (mgr.14505) 565 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:43.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:42 vm02 bash[23351]: audit 2026-03-09T17:40:42.138102+0000 mgr.y (mgr.14505) 565 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:42 vm00 bash[28333]: audit 2026-03-09T17:40:41.899043+0000 mon.c (mon.2) 705 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:42 vm00 bash[28333]: audit 2026-03-09T17:40:41.899043+0000 mon.c (mon.2) 705 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:42 vm00 bash[28333]: audit 2026-03-09T17:40:41.899621+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:42 vm00 bash[28333]: audit 2026-03-09T17:40:41.899621+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:42 vm00 bash[28333]: audit 2026-03-09T17:40:42.138102+0000 mgr.y (mgr.14505) 565 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:42 vm00 bash[28333]: audit 2026-03-09T17:40:42.138102+0000 mgr.y (mgr.14505) 565 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:42 vm00 bash[20770]: audit 2026-03-09T17:40:41.899043+0000 mon.c (mon.2) 705 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:42 vm00 bash[20770]: audit 2026-03-09T17:40:41.899043+0000 mon.c (mon.2) 705 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:42 vm00 bash[20770]: audit 2026-03-09T17:40:41.899621+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:42 vm00 bash[20770]: audit 2026-03-09T17:40:41.899621+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:42 vm00 bash[20770]: audit 2026-03-09T17:40:42.138102+0000 mgr.y (mgr.14505) 565 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:42 vm00 bash[20770]: audit 2026-03-09T17:40:42.138102+0000 mgr.y (mgr.14505) 565 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: cluster 2026-03-09T17:40:42.842878+0000 mgr.y (mgr.14505) 566 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: cluster 2026-03-09T17:40:42.842878+0000 mgr.y (mgr.14505) 566 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:42.872266+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:42.872266+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: cluster 2026-03-09T17:40:42.884628+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: cluster 2026-03-09T17:40:42.884628+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:42.885677+0000 mon.c (mon.2) 706 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:42.885677+0000 mon.c (mon.2) 706 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:42.885917+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:42.885917+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:43.273622+0000 mon.a (mon.0) 3192 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:43.273622+0000 mon.a (mon.0) 3192 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:43.277104+0000 mon.c (mon.2) 707 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:43 vm00 bash[28333]: audit 2026-03-09T17:40:43.277104+0000 mon.c (mon.2) 707 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: cluster 2026-03-09T17:40:42.842878+0000 mgr.y (mgr.14505) 566 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: cluster 2026-03-09T17:40:42.842878+0000 mgr.y (mgr.14505) 566 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:42.872266+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:42.872266+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: cluster 2026-03-09T17:40:42.884628+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: cluster 2026-03-09T17:40:42.884628+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:42.885677+0000 mon.c (mon.2) 706 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:42.885677+0000 mon.c (mon.2) 706 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:42.885917+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:42.885917+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:43.273622+0000 mon.a (mon.0) 3192 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:43.273622+0000 mon.a (mon.0) 3192 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:43.277104+0000 mon.c (mon.2) 707 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:44.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:43 vm00 bash[20770]: audit 2026-03-09T17:40:43.277104+0000 mon.c (mon.2) 707 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: cluster 2026-03-09T17:40:42.842878+0000 mgr.y (mgr.14505) 566 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: cluster 2026-03-09T17:40:42.842878+0000 mgr.y (mgr.14505) 566 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:42.872266+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:42.872266+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: cluster 2026-03-09T17:40:42.884628+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: cluster 2026-03-09T17:40:42.884628+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:42.885677+0000 mon.c (mon.2) 706 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:42.885677+0000 mon.c (mon.2) 706 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:42.885917+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:42.885917+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]: dispatch 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:43.273622+0000 mon.a (mon.0) 3192 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:43.273622+0000 mon.a (mon.0) 3192 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:43.277104+0000 mon.c (mon.2) 707 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:43 vm02 bash[23351]: audit 2026-03-09T17:40:43.277104+0000 mon.c (mon.2) 707 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:45 vm02 bash[23351]: cluster 2026-03-09T17:40:43.872451+0000 mon.a (mon.0) 3193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:45 vm02 bash[23351]: cluster 2026-03-09T17:40:43.872451+0000 mon.a (mon.0) 3193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:45 vm02 bash[23351]: audit 2026-03-09T17:40:43.875387+0000 mon.a (mon.0) 3194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:45 vm02 bash[23351]: audit 2026-03-09T17:40:43.875387+0000 mon.a (mon.0) 3194 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:45 vm02 bash[23351]: cluster 2026-03-09T17:40:43.878330+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T17:40:45.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:45 vm02 bash[23351]: cluster 2026-03-09T17:40:43.878330+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:45 vm00 bash[28333]: cluster 2026-03-09T17:40:43.872451+0000 mon.a (mon.0) 3193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:45 vm00 bash[28333]: cluster 2026-03-09T17:40:43.872451+0000 mon.a (mon.0) 3193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:45 vm00 bash[28333]: audit 2026-03-09T17:40:43.875387+0000 mon.a (mon.0) 3194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:45 vm00 bash[28333]: audit 2026-03-09T17:40:43.875387+0000 mon.a (mon.0) 3194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:45 vm00 bash[28333]: cluster 2026-03-09T17:40:43.878330+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:45 vm00 bash[28333]: cluster 2026-03-09T17:40:43.878330+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:45 vm00 bash[20770]: cluster 2026-03-09T17:40:43.872451+0000 mon.a (mon.0) 3193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:45 vm00 bash[20770]: cluster 2026-03-09T17:40:43.872451+0000 mon.a (mon.0) 3193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:45 vm00 bash[20770]: audit 2026-03-09T17:40:43.875387+0000 mon.a (mon.0) 3194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:45 vm00 bash[20770]: audit 2026-03-09T17:40:43.875387+0000 mon.a (mon.0) 3194 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-128"}]': finished 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:45 vm00 bash[20770]: cluster 2026-03-09T17:40:43.878330+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T17:40:45.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:45 vm00 bash[20770]: cluster 2026-03-09T17:40:43.878330+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T17:40:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:46 vm02 bash[23351]: cluster 2026-03-09T17:40:44.843237+0000 mgr.y (mgr.14505) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:46 vm02 bash[23351]: cluster 2026-03-09T17:40:44.843237+0000 mgr.y (mgr.14505) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:46 vm02 bash[23351]: cluster 2026-03-09T17:40:45.089587+0000 mon.a (mon.0) 3196 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T17:40:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:46 vm02 bash[23351]: cluster 2026-03-09T17:40:45.089587+0000 mon.a (mon.0) 3196 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:46 vm00 bash[28333]: cluster 2026-03-09T17:40:44.843237+0000 mgr.y (mgr.14505) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:46 vm00 bash[28333]: cluster 2026-03-09T17:40:44.843237+0000 mgr.y (mgr.14505) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:46 vm00 bash[28333]: cluster 2026-03-09T17:40:45.089587+0000 mon.a (mon.0) 3196 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:46 vm00 bash[28333]: cluster 2026-03-09T17:40:45.089587+0000 mon.a (mon.0) 3196 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:46 vm00 bash[20770]: cluster 2026-03-09T17:40:44.843237+0000 mgr.y (mgr.14505) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:46 vm00 bash[20770]: cluster 2026-03-09T17:40:44.843237+0000 mgr.y (mgr.14505) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:46 vm00 bash[20770]: cluster 2026-03-09T17:40:45.089587+0000 mon.a (mon.0) 3196 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:46 vm00 bash[20770]: cluster 2026-03-09T17:40:45.089587+0000 mon.a (mon.0) 3196 : 
cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T17:40:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:40:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:40:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: cluster 2026-03-09T17:40:46.111454+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: cluster 2026-03-09T17:40:46.111454+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: audit 2026-03-09T17:40:46.124511+0000 mon.c (mon.2) 708 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: audit 2026-03-09T17:40:46.124511+0000 mon.c (mon.2) 708 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: audit 2026-03-09T17:40:46.124841+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: audit 2026-03-09T17:40:46.124841+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: cluster 2026-03-09T17:40:46.795630+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:47 vm02 bash[23351]: cluster 2026-03-09T17:40:46.795630+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: cluster 2026-03-09T17:40:46.111454+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: cluster 2026-03-09T17:40:46.111454+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: audit 2026-03-09T17:40:46.124511+0000 mon.c (mon.2) 708 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: audit 2026-03-09T17:40:46.124511+0000 mon.c (mon.2) 708 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: audit 2026-03-09T17:40:46.124841+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: audit 2026-03-09T17:40:46.124841+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: cluster 2026-03-09T17:40:46.795630+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:47 vm00 bash[28333]: cluster 2026-03-09T17:40:46.795630+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: cluster 2026-03-09T17:40:46.111454+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: cluster 2026-03-09T17:40:46.111454+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: audit 2026-03-09T17:40:46.124511+0000 mon.c (mon.2) 708 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: audit 2026-03-09T17:40:46.124511+0000 mon.c (mon.2) 708 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: audit 2026-03-09T17:40:46.124841+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: audit 2026-03-09T17:40:46.124841+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: cluster 2026-03-09T17:40:46.795630+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:47 vm00 bash[20770]: cluster 2026-03-09T17:40:46.795630+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: cluster 2026-03-09T17:40:46.843626+0000 mgr.y (mgr.14505) 568 : cluster [DBG] pgmap v986: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: cluster 2026-03-09T17:40:46.843626+0000 mgr.y (mgr.14505) 568 : cluster [DBG] pgmap v986: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: audit 2026-03-09T17:40:47.102165+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: audit 2026-03-09T17:40:47.102165+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: cluster 2026-03-09T17:40:47.108399+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: cluster 2026-03-09T17:40:47.108399+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: audit 2026-03-09T17:40:47.145076+0000 mon.c (mon.2) 709 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: audit 2026-03-09T17:40:47.145076+0000 mon.c (mon.2) 709 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: audit 2026-03-09T17:40:47.146793+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:48 vm00 bash[28333]: audit 2026-03-09T17:40:47.146793+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: cluster 2026-03-09T17:40:46.843626+0000 mgr.y (mgr.14505) 568 : cluster [DBG] pgmap v986: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: cluster 2026-03-09T17:40:46.843626+0000 mgr.y (mgr.14505) 568 : cluster [DBG] pgmap v986: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: audit 2026-03-09T17:40:47.102165+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: audit 2026-03-09T17:40:47.102165+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: cluster 2026-03-09T17:40:47.108399+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: cluster 2026-03-09T17:40:47.108399+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: audit 2026-03-09T17:40:47.145076+0000 mon.c (mon.2) 709 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: audit 2026-03-09T17:40:47.145076+0000 mon.c (mon.2) 709 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: audit 2026-03-09T17:40:47.146793+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:48 vm00 bash[20770]: audit 2026-03-09T17:40:47.146793+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: cluster 2026-03-09T17:40:46.843626+0000 mgr.y (mgr.14505) 568 : cluster [DBG] pgmap v986: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: cluster 2026-03-09T17:40:46.843626+0000 mgr.y (mgr.14505) 568 : cluster [DBG] pgmap v986: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 952 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: audit 2026-03-09T17:40:47.102165+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: audit 2026-03-09T17:40:47.102165+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: cluster 2026-03-09T17:40:47.108399+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: cluster 2026-03-09T17:40:47.108399+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: audit 2026-03-09T17:40:47.145076+0000 mon.c (mon.2) 709 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: audit 2026-03-09T17:40:47.145076+0000 mon.c (mon.2) 709 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: audit 2026-03-09T17:40:47.146793+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? 
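The audit trail up to this point shows the test client (the pool names test-rados-api-vm00-60118-* identify the rados API test suite) building a cache tier out of two of its throwaway pools: it first tags the newly created pool test-rados-api-vm00-60118-130 with the rados application (the POOL_APP_NOT_ENABLED health update just tracks how many pools are still untagged), then attaches that pool as a tier of test-rados-api-vm00-60118-111 with --force-nonempty. The test drives these as mon commands from the client node (192.168.123.100); a rough, hedged CLI equivalent, with pool names taken from this run, would be:

    # Sketch only: CLI equivalent of the mon commands audited above, not the test's own code.
    base=test-rados-api-vm00-60118-111
    cache=test-rados-api-vm00-60118-130

    # Tag the new pool so it stops counting against POOL_APP_NOT_ENABLED.
    ceph osd pool application enable "$cache" rados --yes-i-really-mean-it

    # Attach the (non-empty) cache pool as a tier of the base pool.
    ceph osd tier add "$base" "$cache" --force-nonempty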
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:48 vm02 bash[23351]: audit 2026-03-09T17:40:47.146793+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:48.134076+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:48.134076+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: cluster 2026-03-09T17:40:48.146688+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: cluster 2026-03-09T17:40:48.146688+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:48.149216+0000 mon.c (mon.2) 710 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:48.149216+0000 mon.c (mon.2) 710 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:48.150075+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:48.150075+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:49.137098+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: audit 2026-03-09T17:40:49.137098+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: cluster 2026-03-09T17:40:49.139771+0000 mon.a (mon.0) 3207 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:49 vm00 bash[28333]: cluster 2026-03-09T17:40:49.139771+0000 mon.a (mon.0) 3207 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:48.134076+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:48.134076+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: cluster 2026-03-09T17:40:48.146688+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T17:40:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: cluster 2026-03-09T17:40:48.146688+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:48.149216+0000 mon.c (mon.2) 710 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:48.149216+0000 mon.c (mon.2) 710 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:48.150075+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:48.150075+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:49.137098+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: audit 2026-03-09T17:40:49.137098+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: cluster 2026-03-09T17:40:49.139771+0000 mon.a (mon.0) 3207 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T17:40:49.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:49 vm00 bash[20770]: cluster 2026-03-09T17:40:49.139771+0000 mon.a (mon.0) 3207 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:48.134076+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:48.134076+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: cluster 2026-03-09T17:40:48.146688+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: cluster 2026-03-09T17:40:48.146688+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:48.149216+0000 mon.c (mon.2) 710 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:48.149216+0000 mon.c (mon.2) 710 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:48.150075+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:48.150075+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:49.137098+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: audit 2026-03-09T17:40:49.137098+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: cluster 2026-03-09T17:40:49.139771+0000 mon.a (mon.0) 3207 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T17:40:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:49 vm02 bash[23351]: cluster 2026-03-09T17:40:49.139771+0000 mon.a (mon.0) 3207 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: cluster 2026-03-09T17:40:48.844076+0000 mgr.y (mgr.14505) 569 : cluster [DBG] pgmap v989: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: cluster 2026-03-09T17:40:48.844076+0000 mgr.y (mgr.14505) 569 : cluster [DBG] pgmap v989: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: audit 2026-03-09T17:40:49.155282+0000 mon.c (mon.2) 711 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: audit 2026-03-09T17:40:49.155282+0000 mon.c (mon.2) 711 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: audit 2026-03-09T17:40:49.161137+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: audit 2026-03-09T17:40:49.161137+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: cluster 2026-03-09T17:40:50.155992+0000 mon.a (mon.0) 3209 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: cluster 2026-03-09T17:40:50.155992+0000 mon.a (mon.0) 3209 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: audit 2026-03-09T17:40:50.159546+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]': finished 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:50 vm00 bash[28333]: audit 2026-03-09T17:40:50.159546+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]': finished 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: cluster 2026-03-09T17:40:48.844076+0000 mgr.y (mgr.14505) 569 : cluster [DBG] pgmap v989: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: cluster 2026-03-09T17:40:48.844076+0000 mgr.y (mgr.14505) 569 : cluster [DBG] pgmap v989: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: audit 2026-03-09T17:40:49.155282+0000 mon.c (mon.2) 711 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: audit 2026-03-09T17:40:49.155282+0000 mon.c (mon.2) 711 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: audit 2026-03-09T17:40:49.161137+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: audit 2026-03-09T17:40:49.161137+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: cluster 2026-03-09T17:40:50.155992+0000 mon.a (mon.0) 3209 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: cluster 2026-03-09T17:40:50.155992+0000 mon.a (mon.0) 3209 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: audit 2026-03-09T17:40:50.159546+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]': finished 2026-03-09T17:40:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:50 vm00 bash[20770]: audit 2026-03-09T17:40:50.159546+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]': finished 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: cluster 2026-03-09T17:40:48.844076+0000 mgr.y (mgr.14505) 569 : cluster [DBG] pgmap v989: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: cluster 2026-03-09T17:40:48.844076+0000 mgr.y (mgr.14505) 569 : cluster [DBG] pgmap v989: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: audit 2026-03-09T17:40:49.155282+0000 mon.c (mon.2) 711 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: audit 2026-03-09T17:40:49.155282+0000 mon.c (mon.2) 711 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: audit 2026-03-09T17:40:49.161137+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: audit 2026-03-09T17:40:49.161137+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 
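The next two audited commands finish the cache-tier setup: osd tier set-overlay routes client I/O for the base pool through the cache pool, and osd tier cache-mode switches the cache pool to writeback. Because the test never configures a hit set on the cache pool, mon.a raises CACHE_POOL_NO_HIT_SET shortly afterwards; that is a routine consequence of this sequence rather than a failure. A hedged sketch of the same steps, plus the hit-set settings that would keep the warning from appearing on a real cluster:

    # Sketch continued; the hit_set_* values are illustrative, not taken from this run.
    base=test-rados-api-vm00-60118-111
    cache=test-rados-api-vm00-60118-130

    ceph osd tier set-overlay "$base" "$cache"
    ceph osd tier cache-mode "$cache" writeback

    # Not done by the test (hence the warning): give the cache pool a hit set.
    ceph osd pool set "$cache" hit_set_type bloom
    ceph osd pool set "$cache" hit_set_count 8
    ceph osd pool set "$cache" hit_set_period 3600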
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]: dispatch 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: cluster 2026-03-09T17:40:50.155992+0000 mon.a (mon.0) 3209 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: cluster 2026-03-09T17:40:50.155992+0000 mon.a (mon.0) 3209 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:40:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: audit 2026-03-09T17:40:50.159546+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]': finished 2026-03-09T17:40:50.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:50 vm02 bash[23351]: audit 2026-03-09T17:40:50.159546+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-130", "mode": "writeback"}]': finished 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:51 vm00 bash[28333]: cluster 2026-03-09T17:40:50.172691+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:51 vm00 bash[28333]: cluster 2026-03-09T17:40:50.172691+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:51 vm00 bash[28333]: audit 2026-03-09T17:40:50.350147+0000 mon.c (mon.2) 712 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:51 vm00 bash[28333]: audit 2026-03-09T17:40:50.350147+0000 mon.c (mon.2) 712 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:51 vm00 bash[28333]: audit 2026-03-09T17:40:50.350538+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:51 vm00 bash[28333]: audit 2026-03-09T17:40:50.350538+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:51 vm00 bash[20770]: cluster 2026-03-09T17:40:50.172691+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:51 vm00 bash[20770]: cluster 2026-03-09T17:40:50.172691+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:51 vm00 bash[20770]: audit 2026-03-09T17:40:50.350147+0000 mon.c (mon.2) 712 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:51 vm00 bash[20770]: audit 2026-03-09T17:40:50.350147+0000 mon.c (mon.2) 712 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:51 vm00 bash[20770]: audit 2026-03-09T17:40:50.350538+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:51 vm00 bash[20770]: audit 2026-03-09T17:40:50.350538+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:51 vm02 bash[23351]: cluster 2026-03-09T17:40:50.172691+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T17:40:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:51 vm02 bash[23351]: cluster 2026-03-09T17:40:50.172691+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T17:40:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:51 vm02 bash[23351]: audit 2026-03-09T17:40:50.350147+0000 mon.c (mon.2) 712 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:51 vm02 bash[23351]: audit 2026-03-09T17:40:50.350147+0000 mon.c (mon.2) 712 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:51 vm02 bash[23351]: audit 2026-03-09T17:40:50.350538+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:51 vm02 bash[23351]: audit 2026-03-09T17:40:50.350538+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:40:52.414 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:40:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: cluster 2026-03-09T17:40:50.844460+0000 mgr.y (mgr.14505) 570 : cluster [DBG] pgmap v992: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: cluster 2026-03-09T17:40:50.844460+0000 mgr.y (mgr.14505) 570 : cluster [DBG] pgmap v992: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: audit 2026-03-09T17:40:51.391416+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: audit 2026-03-09T17:40:51.391416+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: audit 2026-03-09T17:40:51.402844+0000 mon.c (mon.2) 713 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: audit 2026-03-09T17:40:51.402844+0000 mon.c (mon.2) 713 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: cluster 2026-03-09T17:40:51.405344+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: cluster 2026-03-09T17:40:51.405344+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: audit 2026-03-09T17:40:51.405776+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: audit 2026-03-09T17:40:51.405776+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: cluster 2026-03-09T17:40:51.796237+0000 mon.a (mon.0) 3216 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:52 vm00 bash[28333]: cluster 2026-03-09T17:40:51.796237+0000 mon.a (mon.0) 3216 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: cluster 2026-03-09T17:40:50.844460+0000 mgr.y (mgr.14505) 570 : cluster [DBG] pgmap v992: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: cluster 2026-03-09T17:40:50.844460+0000 mgr.y (mgr.14505) 570 : cluster [DBG] pgmap v992: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: audit 2026-03-09T17:40:51.391416+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: audit 2026-03-09T17:40:51.391416+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: audit 2026-03-09T17:40:51.402844+0000 mon.c (mon.2) 713 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: audit 2026-03-09T17:40:51.402844+0000 mon.c (mon.2) 713 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: cluster 2026-03-09T17:40:51.405344+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: cluster 2026-03-09T17:40:51.405344+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: audit 2026-03-09T17:40:51.405776+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: audit 2026-03-09T17:40:51.405776+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: cluster 2026-03-09T17:40:51.796237+0000 mon.a (mon.0) 3216 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:52.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:52 vm00 bash[20770]: cluster 2026-03-09T17:40:51.796237+0000 mon.a (mon.0) 3216 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: cluster 2026-03-09T17:40:50.844460+0000 mgr.y (mgr.14505) 570 : cluster [DBG] pgmap v992: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: cluster 2026-03-09T17:40:50.844460+0000 mgr.y (mgr.14505) 570 : cluster [DBG] pgmap v992: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:40:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: audit 2026-03-09T17:40:51.391416+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: audit 2026-03-09T17:40:51.391416+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:40:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: audit 2026-03-09T17:40:51.402844+0000 mon.c (mon.2) 713 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: audit 2026-03-09T17:40:51.402844+0000 mon.c (mon.2) 713 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: cluster 2026-03-09T17:40:51.405344+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: cluster 2026-03-09T17:40:51.405344+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: audit 2026-03-09T17:40:51.405776+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: audit 2026-03-09T17:40:51.405776+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]: dispatch 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: cluster 2026-03-09T17:40:51.796237+0000 mon.a (mon.0) 3216 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:52.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:52 vm02 bash[23351]: cluster 2026-03-09T17:40:51.796237+0000 mon.a (mon.0) 3216 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: audit 2026-03-09T17:40:52.147772+0000 mgr.y (mgr.14505) 571 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: audit 2026-03-09T17:40:52.147772+0000 mgr.y (mgr.14505) 571 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: cluster 2026-03-09T17:40:52.391600+0000 mon.a (mon.0) 3217 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: cluster 2026-03-09T17:40:52.391600+0000 mon.a (mon.0) 3217 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: audit 2026-03-09T17:40:52.395676+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: audit 2026-03-09T17:40:52.395676+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 
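The tier is then torn down in reverse order: osd tier remove-overlay stops redirecting client I/O through the cache pool, after which mon.a clears CACHE_POOL_NO_HIT_SET, and osd tier remove detaches the cache pool from the base pool. A hedged CLI sketch of the same teardown:

    # Sketch of the teardown mirrored by the audit entries above.
    base=test-rados-api-vm00-60118-111
    cache=test-rados-api-vm00-60118-130

    ceph osd tier remove-overlay "$base"    # stop overlaying the base pool with the cache
    ceph osd tier remove "$base" "$cache"   # unlink the cache pool from the base pool

On a production cluster one would normally flush and evict the cache first (for example with rados -p "$cache" cache-flush-evict-all) before detaching it; the test skips that because both pools are disposable.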
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: cluster 2026-03-09T17:40:52.400957+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:53 vm00 bash[28333]: cluster 2026-03-09T17:40:52.400957+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: audit 2026-03-09T17:40:52.147772+0000 mgr.y (mgr.14505) 571 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: audit 2026-03-09T17:40:52.147772+0000 mgr.y (mgr.14505) 571 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: cluster 2026-03-09T17:40:52.391600+0000 mon.a (mon.0) 3217 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: cluster 2026-03-09T17:40:52.391600+0000 mon.a (mon.0) 3217 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: audit 2026-03-09T17:40:52.395676+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: audit 2026-03-09T17:40:52.395676+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: cluster 2026-03-09T17:40:52.400957+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T17:40:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:53 vm00 bash[20770]: cluster 2026-03-09T17:40:52.400957+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T17:40:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: audit 2026-03-09T17:40:52.147772+0000 mgr.y (mgr.14505) 571 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: audit 2026-03-09T17:40:52.147772+0000 mgr.y (mgr.14505) 571 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:40:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: cluster 2026-03-09T17:40:52.391600+0000 mon.a (mon.0) 3217 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: cluster 2026-03-09T17:40:52.391600+0000 mon.a (mon.0) 3217 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:40:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: audit 2026-03-09T17:40:52.395676+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: audit 2026-03-09T17:40:52.395676+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-130"}]': finished 2026-03-09T17:40:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: cluster 2026-03-09T17:40:52.400957+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T17:40:53.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:53 vm02 bash[23351]: cluster 2026-03-09T17:40:52.400957+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:54 vm00 bash[28333]: cluster 2026-03-09T17:40:52.845022+0000 mgr.y (mgr.14505) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:54 vm00 bash[28333]: cluster 2026-03-09T17:40:52.845022+0000 mgr.y (mgr.14505) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:54 vm00 bash[28333]: cluster 2026-03-09T17:40:53.438313+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:54 vm00 bash[28333]: cluster 2026-03-09T17:40:53.438313+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:54 vm00 bash[20770]: cluster 2026-03-09T17:40:52.845022+0000 mgr.y (mgr.14505) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:54 vm00 bash[20770]: cluster 2026-03-09T17:40:52.845022+0000 mgr.y (mgr.14505) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:54 vm00 bash[20770]: cluster 2026-03-09T17:40:53.438313+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T17:40:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:54 vm00 bash[20770]: cluster 2026-03-09T17:40:53.438313+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T17:40:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:54 vm02 bash[23351]: cluster 2026-03-09T17:40:52.845022+0000 mgr.y (mgr.14505) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T17:40:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:54 vm02 bash[23351]: cluster 2026-03-09T17:40:52.845022+0000 mgr.y (mgr.14505) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T17:40:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:54 vm02 bash[23351]: cluster 2026-03-09T17:40:53.438313+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T17:40:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:54 vm02 bash[23351]: cluster 2026-03-09T17:40:53.438313+0000 mon.a (mon.0) 3220 : 
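Every cluster/audit record appears several times in this capture: once per monitor journal being followed (mon.a and mon.c on vm00, mon.b on vm02) and again because the journalctl scrape repeats lines. When tracing the test's progress it is usually enough to keep only the completed admin commands. A small sketch, assuming the archived log file is the job's teuthology.log:

    # Collapse the per-monitor and duplicated copies of each completed mon command.
    grep -oE "cmd='[^']*': finished" teuthology.log | sort | uniq -c | sort -rn | head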
cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: cluster 2026-03-09T17:40:54.474867+0000 mon.a (mon.0) 3221 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: cluster 2026-03-09T17:40:54.474867+0000 mon.a (mon.0) 3221 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: audit 2026-03-09T17:40:54.479894+0000 mon.c (mon.2) 714 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: audit 2026-03-09T17:40:54.479894+0000 mon.c (mon.2) 714 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: audit 2026-03-09T17:40:54.483675+0000 mon.a (mon.0) 3222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: audit 2026-03-09T17:40:54.483675+0000 mon.a (mon.0) 3222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: cluster 2026-03-09T17:40:54.845305+0000 mgr.y (mgr.14505) 573 : cluster [DBG] pgmap v998: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:55 vm00 bash[28333]: cluster 2026-03-09T17:40:54.845305+0000 mgr.y (mgr.14505) 573 : cluster [DBG] pgmap v998: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: cluster 2026-03-09T17:40:54.474867+0000 mon.a (mon.0) 3221 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: cluster 2026-03-09T17:40:54.474867+0000 mon.a (mon.0) 3221 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: audit 2026-03-09T17:40:54.479894+0000 mon.c (mon.2) 714 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: audit 2026-03-09T17:40:54.479894+0000 mon.c (mon.2) 714 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: audit 2026-03-09T17:40:54.483675+0000 mon.a (mon.0) 3222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: audit 2026-03-09T17:40:54.483675+0000 mon.a (mon.0) 3222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: cluster 2026-03-09T17:40:54.845305+0000 mgr.y (mgr.14505) 573 : cluster [DBG] pgmap v998: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:55.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:55 vm00 bash[20770]: cluster 2026-03-09T17:40:54.845305+0000 mgr.y (mgr.14505) 573 : cluster [DBG] pgmap v998: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: cluster 2026-03-09T17:40:54.474867+0000 mon.a (mon.0) 3221 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: cluster 2026-03-09T17:40:54.474867+0000 mon.a (mon.0) 3221 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: audit 2026-03-09T17:40:54.479894+0000 mon.c (mon.2) 714 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: audit 2026-03-09T17:40:54.479894+0000 mon.c (mon.2) 714 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: audit 2026-03-09T17:40:54.483675+0000 mon.a (mon.0) 3222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: audit 2026-03-09T17:40:54.483675+0000 mon.a (mon.0) 3222 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: cluster 2026-03-09T17:40:54.845305+0000 mgr.y (mgr.14505) 573 : cluster [DBG] pgmap v998: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:55 vm02 bash[23351]: cluster 2026-03-09T17:40:54.845305+0000 mgr.y (mgr.14505) 573 : cluster [DBG] pgmap v998: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.465894+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.465894+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: cluster 2026-03-09T17:40:55.471094+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: cluster 2026-03-09T17:40:55.471094+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.501791+0000 mon.c (mon.2) 715 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.501791+0000 mon.c (mon.2) 715 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.503341+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.503341+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.571592+0000 mon.c (mon.2) 716 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:55.571592+0000 mon.c (mon.2) 716 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:56.023198+0000 mon.a (mon.0) 3226 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:56.023198+0000 mon.a (mon.0) 3226 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:56.255591+0000 mon.a (mon.0) 3227 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:56 vm00 bash[28333]: audit 2026-03-09T17:40:56.255591+0000 mon.a (mon.0) 3227 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.465894+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.465894+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: cluster 2026-03-09T17:40:55.471094+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: cluster 2026-03-09T17:40:55.471094+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.501791+0000 mon.c (mon.2) 715 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.501791+0000 mon.c (mon.2) 715 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.503341+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.503341+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.571592+0000 mon.c (mon.2) 716 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:55.571592+0000 mon.c (mon.2) 716 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:56.023198+0000 mon.a (mon.0) 3226 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:56.023198+0000 mon.a (mon.0) 3226 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:56.255591+0000 mon.a (mon.0) 3227 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:56 vm00 bash[20770]: audit 2026-03-09T17:40:56.255591+0000 mon.a (mon.0) 3227 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:40:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:40:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.465894+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.465894+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: cluster 2026-03-09T17:40:55.471094+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: cluster 2026-03-09T17:40:55.471094+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.501791+0000 mon.c (mon.2) 715 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.501791+0000 mon.c (mon.2) 715 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.503341+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.503341+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.571592+0000 mon.c (mon.2) 716 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:55.571592+0000 mon.c (mon.2) 716 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:56.023198+0000 mon.a (mon.0) 3226 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:56.023198+0000 mon.a (mon.0) 3226 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:56.255591+0000 mon.a (mon.0) 3227 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:56 vm02 bash[23351]: audit 2026-03-09T17:40:56.255591+0000 mon.a (mon.0) 3227 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.658508+0000 mon.a (mon.0) 3228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.658508+0000 mon.a (mon.0) 3228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: cluster 2026-03-09T17:40:56.699148+0000 mon.a (mon.0) 3229 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: cluster 2026-03-09T17:40:56.699148+0000 mon.a (mon.0) 3229 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.722006+0000 mon.c (mon.2) 717 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.722006+0000 mon.c (mon.2) 717 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.722609+0000 mon.c (mon.2) 718 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.722609+0000 mon.c (mon.2) 718 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.722878+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.722878+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.723772+0000 mon.c (mon.2) 719 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.723772+0000 mon.c (mon.2) 719 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.772016+0000 mon.a (mon.0) 3231 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: audit 2026-03-09T17:40:56.772016+0000 mon.a (mon.0) 3231 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: cluster 2026-03-09T17:40:56.796816+0000 mon.a (mon.0) 3232 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: cluster 2026-03-09T17:40:56.796816+0000 mon.a (mon.0) 3232 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: cluster 2026-03-09T17:40:56.845622+0000 mgr.y (mgr.14505) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:57 vm00 bash[28333]: cluster 2026-03-09T17:40:56.845622+0000 mgr.y (mgr.14505) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.658508+0000 mon.a (mon.0) 3228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.658508+0000 mon.a (mon.0) 3228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: cluster 2026-03-09T17:40:56.699148+0000 mon.a (mon.0) 3229 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: cluster 2026-03-09T17:40:56.699148+0000 mon.a (mon.0) 3229 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.722006+0000 mon.c (mon.2) 717 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.722006+0000 mon.c (mon.2) 717 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.722609+0000 mon.c (mon.2) 718 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.722609+0000 mon.c (mon.2) 718 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.722878+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.722878+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.723772+0000 mon.c (mon.2) 719 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.723772+0000 mon.c (mon.2) 719 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.772016+0000 mon.a (mon.0) 3231 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: audit 2026-03-09T17:40:56.772016+0000 mon.a (mon.0) 3231 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: cluster 2026-03-09T17:40:56.796816+0000 mon.a (mon.0) 3232 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: cluster 2026-03-09T17:40:56.796816+0000 mon.a (mon.0) 3232 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: cluster 2026-03-09T17:40:56.845622+0000 mgr.y (mgr.14505) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:58.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:57 vm00 bash[20770]: cluster 2026-03-09T17:40:56.845622+0000 mgr.y (mgr.14505) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.658508+0000 mon.a (mon.0) 3228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.658508+0000 mon.a (mon.0) 3228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: cluster 2026-03-09T17:40:56.699148+0000 mon.a (mon.0) 3229 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: cluster 2026-03-09T17:40:56.699148+0000 mon.a (mon.0) 3229 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.722006+0000 mon.c (mon.2) 717 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.722006+0000 mon.c (mon.2) 717 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.722609+0000 mon.c (mon.2) 718 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.722609+0000 mon.c (mon.2) 718 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.722878+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.722878+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.723772+0000 mon.c (mon.2) 719 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.723772+0000 mon.c (mon.2) 719 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.772016+0000 mon.a (mon.0) 3231 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: audit 2026-03-09T17:40:56.772016+0000 mon.a (mon.0) 3231 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: cluster 2026-03-09T17:40:56.796816+0000 mon.a (mon.0) 3232 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: cluster 2026-03-09T17:40:56.796816+0000 mon.a (mon.0) 3232 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: cluster 2026-03-09T17:40:56.845622+0000 mgr.y (mgr.14505) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:57 vm02 bash[23351]: cluster 2026-03-09T17:40:56.845622+0000 mgr.y (mgr.14505) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:57.672619+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:57.672619+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:57.679122+0000 mon.c (mon.2) 720 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:57.679122+0000 mon.c (mon.2) 720 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: cluster 2026-03-09T17:40:57.686228+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: cluster 2026-03-09T17:40:57.686228+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:57.687198+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:57.687198+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:58.289676+0000 mon.a (mon.0) 3236 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:58.289676+0000 mon.a (mon.0) 3236 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:58.291477+0000 mon.c (mon.2) 721 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:58 vm00 bash[28333]: audit 2026-03-09T17:40:58.291477+0000 mon.c (mon.2) 721 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:57.672619+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:57.672619+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:57.679122+0000 mon.c (mon.2) 720 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:57.679122+0000 mon.c (mon.2) 720 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: cluster 2026-03-09T17:40:57.686228+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: cluster 2026-03-09T17:40:57.686228+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:57.687198+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:57.687198+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:58.289676+0000 mon.a (mon.0) 3236 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:58.289676+0000 mon.a (mon.0) 3236 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:58.291477+0000 mon.c (mon.2) 721 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:58 vm00 bash[20770]: audit 2026-03-09T17:40:58.291477+0000 mon.c (mon.2) 721 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:57.672619+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:57.672619+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:57.679122+0000 mon.c (mon.2) 720 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:57.679122+0000 mon.c (mon.2) 720 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: cluster 2026-03-09T17:40:57.686228+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: cluster 2026-03-09T17:40:57.686228+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:57.687198+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:57.687198+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]: dispatch 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:58.289676+0000 mon.a (mon.0) 3236 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:58.289676+0000 mon.a (mon.0) 3236 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:58.291477+0000 mon.c (mon.2) 721 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:40:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:58 vm02 bash[23351]: audit 2026-03-09T17:40:58.291477+0000 mon.c (mon.2) 721 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: cluster 2026-03-09T17:40:58.672645+0000 mon.a (mon.0) 3237 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: cluster 2026-03-09T17:40:58.672645+0000 mon.a (mon.0) 3237 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: audit 2026-03-09T17:40:58.675544+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]': finished 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: audit 2026-03-09T17:40:58.675544+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]': finished 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: cluster 2026-03-09T17:40:58.680628+0000 mon.a (mon.0) 3239 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: cluster 2026-03-09T17:40:58.680628+0000 mon.a (mon.0) 3239 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: cluster 2026-03-09T17:40:58.846045+0000 mgr.y (mgr.14505) 575 : cluster [DBG] pgmap v1004: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:40:59 vm00 bash[28333]: cluster 2026-03-09T17:40:58.846045+0000 mgr.y (mgr.14505) 575 : cluster [DBG] pgmap v1004: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: cluster 2026-03-09T17:40:58.672645+0000 mon.a (mon.0) 3237 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: cluster 2026-03-09T17:40:58.672645+0000 mon.a (mon.0) 3237 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: audit 2026-03-09T17:40:58.675544+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]': finished 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: audit 2026-03-09T17:40:58.675544+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]': finished 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: cluster 2026-03-09T17:40:58.680628+0000 mon.a (mon.0) 3239 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: cluster 2026-03-09T17:40:58.680628+0000 mon.a (mon.0) 3239 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: cluster 2026-03-09T17:40:58.846045+0000 mgr.y (mgr.14505) 575 : cluster [DBG] pgmap v1004: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:40:59 vm00 bash[20770]: cluster 2026-03-09T17:40:58.846045+0000 mgr.y (mgr.14505) 575 : cluster [DBG] pgmap v1004: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: cluster 2026-03-09T17:40:58.672645+0000 mon.a (mon.0) 3237 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: cluster 2026-03-09T17:40:58.672645+0000 mon.a (mon.0) 3237 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: audit 2026-03-09T17:40:58.675544+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]': finished 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: audit 2026-03-09T17:40:58.675544+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-132", "mode": "writeback"}]': finished 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: cluster 2026-03-09T17:40:58.680628+0000 mon.a (mon.0) 3239 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: cluster 2026-03-09T17:40:58.680628+0000 mon.a (mon.0) 3239 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: cluster 2026-03-09T17:40:58.846045+0000 mgr.y (mgr.14505) 575 : cluster [DBG] pgmap v1004: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:40:59 vm02 bash[23351]: cluster 2026-03-09T17:40:58.846045+0000 mgr.y (mgr.14505) 575 : cluster [DBG] pgmap v1004: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:01 vm00 bash[28333]: cluster 2026-03-09T17:40:59.714515+0000 mon.a (mon.0) 3240 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T17:41:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:01 vm00 bash[28333]: cluster 2026-03-09T17:40:59.714515+0000 mon.a (mon.0) 3240 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T17:41:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:01 vm00 bash[20770]: cluster 2026-03-09T17:40:59.714515+0000 mon.a (mon.0) 3240 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T17:41:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:01 vm00 bash[20770]: cluster 2026-03-09T17:40:59.714515+0000 mon.a (mon.0) 3240 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T17:41:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:01 vm02 bash[23351]: cluster 2026-03-09T17:40:59.714515+0000 mon.a (mon.0) 3240 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T17:41:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:01 vm02 bash[23351]: cluster 2026-03-09T17:40:59.714515+0000 mon.a (mon.0) 3240 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T17:41:02.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:41:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: cluster 2026-03-09T17:41:00.846469+0000 mgr.y (mgr.14505) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: cluster 2026-03-09T17:41:00.846469+0000 mgr.y (mgr.14505) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: cluster 2026-03-09T17:41:00.981271+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: cluster 2026-03-09T17:41:00.981271+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T17:41:02.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: audit 2026-03-09T17:41:01.089064+0000 mon.c (mon.2) 722 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: audit 2026-03-09T17:41:01.089064+0000 mon.c (mon.2) 722 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: audit 2026-03-09T17:41:01.089502+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: audit 2026-03-09T17:41:01.089502+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: cluster 2026-03-09T17:41:01.797432+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:02.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:02 vm02 bash[23351]: cluster 2026-03-09T17:41:01.797432+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: cluster 2026-03-09T17:41:00.846469+0000 mgr.y (mgr.14505) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: cluster 2026-03-09T17:41:00.846469+0000 mgr.y (mgr.14505) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: cluster 2026-03-09T17:41:00.981271+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: cluster 2026-03-09T17:41:00.981271+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: audit 2026-03-09T17:41:01.089064+0000 mon.c (mon.2) 722 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: audit 2026-03-09T17:41:01.089064+0000 mon.c (mon.2) 722 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: audit 2026-03-09T17:41:01.089502+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: audit 2026-03-09T17:41:01.089502+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: cluster 2026-03-09T17:41:01.797432+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:02 vm00 bash[28333]: cluster 2026-03-09T17:41:01.797432+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: cluster 2026-03-09T17:41:00.846469+0000 mgr.y (mgr.14505) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: cluster 2026-03-09T17:41:00.846469+0000 mgr.y (mgr.14505) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 954 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: cluster 2026-03-09T17:41:00.981271+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: cluster 2026-03-09T17:41:00.981271+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: audit 2026-03-09T17:41:01.089064+0000 mon.c (mon.2) 722 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: audit 2026-03-09T17:41:01.089064+0000 mon.c (mon.2) 722 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: audit 2026-03-09T17:41:01.089502+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: audit 2026-03-09T17:41:01.089502+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: cluster 2026-03-09T17:41:01.797432+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:02 vm00 bash[20770]: cluster 2026-03-09T17:41:01.797432+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.028099+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.028099+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: cluster 2026-03-09T17:41:02.033313+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: cluster 2026-03-09T17:41:02.033313+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.059416+0000 mon.c (mon.2) 723 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.059416+0000 mon.c (mon.2) 723 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.059695+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.059695+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.158392+0000 mgr.y (mgr.14505) 577 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:03 vm02 bash[23351]: audit 2026-03-09T17:41:02.158392+0000 mgr.y (mgr.14505) 577 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.028099+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.028099+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: cluster 2026-03-09T17:41:02.033313+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: cluster 2026-03-09T17:41:02.033313+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.059416+0000 mon.c (mon.2) 723 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.059416+0000 mon.c (mon.2) 723 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.059695+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.059695+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.158392+0000 mgr.y (mgr.14505) 577 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:03 vm00 bash[28333]: audit 2026-03-09T17:41:02.158392+0000 mgr.y (mgr.14505) 577 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.028099+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.028099+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: cluster 2026-03-09T17:41:02.033313+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: cluster 2026-03-09T17:41:02.033313+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.059416+0000 mon.c (mon.2) 723 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.059416+0000 mon.c (mon.2) 723 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.059695+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.059695+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.158392+0000 mgr.y (mgr.14505) 577 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:03 vm00 bash[20770]: audit 2026-03-09T17:41:02.158392+0000 mgr.y (mgr.14505) 577 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:04 vm02 bash[23351]: cluster 2026-03-09T17:41:02.847313+0000 mgr.y (mgr.14505) 578 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:41:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:04 vm02 bash[23351]: cluster 2026-03-09T17:41:02.847313+0000 mgr.y (mgr.14505) 578 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:41:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:04 vm02 bash[23351]: audit 2026-03-09T17:41:03.046722+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:04 vm02 bash[23351]: audit 2026-03-09T17:41:03.046722+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:04 vm02 bash[23351]: cluster 2026-03-09T17:41:03.051637+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T17:41:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:04 vm02 bash[23351]: cluster 2026-03-09T17:41:03.051637+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:04 vm00 bash[28333]: cluster 2026-03-09T17:41:02.847313+0000 mgr.y (mgr.14505) 578 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:04 vm00 bash[28333]: cluster 2026-03-09T17:41:02.847313+0000 mgr.y (mgr.14505) 578 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:04 vm00 bash[28333]: audit 2026-03-09T17:41:03.046722+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:04 vm00 bash[28333]: audit 2026-03-09T17:41:03.046722+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:04 vm00 bash[28333]: cluster 2026-03-09T17:41:03.051637+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:04 vm00 bash[28333]: cluster 2026-03-09T17:41:03.051637+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:04 vm00 bash[20770]: cluster 2026-03-09T17:41:02.847313+0000 mgr.y (mgr.14505) 578 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:04 vm00 bash[20770]: cluster 2026-03-09T17:41:02.847313+0000 mgr.y (mgr.14505) 578 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:04 vm00 bash[20770]: audit 2026-03-09T17:41:03.046722+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:04 vm00 bash[20770]: audit 2026-03-09T17:41:03.046722+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:04 vm00 bash[20770]: cluster 2026-03-09T17:41:03.051637+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T17:41:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:04 vm00 bash[20770]: cluster 2026-03-09T17:41:03.051637+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:05 vm00 bash[28333]: cluster 2026-03-09T17:41:04.346794+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:05 vm00 bash[28333]: cluster 2026-03-09T17:41:04.346794+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:05 vm00 bash[28333]: audit 2026-03-09T17:41:04.398888+0000 mon.c (mon.2) 724 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:05 vm00 bash[28333]: audit 2026-03-09T17:41:04.398888+0000 mon.c (mon.2) 724 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:05 vm00 bash[28333]: audit 2026-03-09T17:41:04.399164+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:05 vm00 bash[28333]: audit 2026-03-09T17:41:04.399164+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:05 vm00 bash[20770]: cluster 2026-03-09T17:41:04.346794+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:05 vm00 bash[20770]: cluster 2026-03-09T17:41:04.346794+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:05 vm00 bash[20770]: audit 2026-03-09T17:41:04.398888+0000 mon.c (mon.2) 724 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:05 vm00 bash[20770]: audit 2026-03-09T17:41:04.398888+0000 mon.c (mon.2) 724 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:05 vm00 bash[20770]: audit 2026-03-09T17:41:04.399164+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:05 vm00 bash[20770]: audit 2026-03-09T17:41:04.399164+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:05 vm02 bash[23351]: cluster 2026-03-09T17:41:04.346794+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T17:41:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:05 vm02 bash[23351]: cluster 2026-03-09T17:41:04.346794+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T17:41:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:05 vm02 bash[23351]: audit 2026-03-09T17:41:04.398888+0000 mon.c (mon.2) 724 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:05 vm02 bash[23351]: audit 2026-03-09T17:41:04.398888+0000 mon.c (mon.2) 724 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:05 vm02 bash[23351]: audit 2026-03-09T17:41:04.399164+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:05.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:05 vm02 bash[23351]: audit 2026-03-09T17:41:04.399164+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: cluster 2026-03-09T17:41:04.847694+0000 mgr.y (mgr.14505) 579 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: cluster 2026-03-09T17:41:04.847694+0000 mgr.y (mgr.14505) 579 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: audit 2026-03-09T17:41:05.419562+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: audit 2026-03-09T17:41:05.419562+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: audit 2026-03-09T17:41:05.433981+0000 mon.c (mon.2) 725 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: audit 2026-03-09T17:41:05.433981+0000 mon.c (mon.2) 725 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: cluster 2026-03-09T17:41:05.435731+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: cluster 2026-03-09T17:41:05.435731+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: audit 2026-03-09T17:41:05.436431+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:06 vm00 bash[20770]: audit 2026-03-09T17:41:05.436431+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: cluster 2026-03-09T17:41:04.847694+0000 mgr.y (mgr.14505) 579 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: cluster 2026-03-09T17:41:04.847694+0000 mgr.y (mgr.14505) 579 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: audit 2026-03-09T17:41:05.419562+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: audit 2026-03-09T17:41:05.419562+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: audit 2026-03-09T17:41:05.433981+0000 mon.c (mon.2) 725 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: audit 2026-03-09T17:41:05.433981+0000 mon.c (mon.2) 725 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: cluster 2026-03-09T17:41:05.435731+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: cluster 2026-03-09T17:41:05.435731+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: audit 2026-03-09T17:41:05.436431+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:06 vm00 bash[28333]: audit 2026-03-09T17:41:05.436431+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:41:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:41:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: cluster 2026-03-09T17:41:04.847694+0000 mgr.y (mgr.14505) 579 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: cluster 2026-03-09T17:41:04.847694+0000 mgr.y (mgr.14505) 579 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: audit 2026-03-09T17:41:05.419562+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: audit 2026-03-09T17:41:05.419562+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: audit 2026-03-09T17:41:05.433981+0000 mon.c (mon.2) 725 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: audit 2026-03-09T17:41:05.433981+0000 mon.c (mon.2) 725 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: cluster 2026-03-09T17:41:05.435731+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: cluster 2026-03-09T17:41:05.435731+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: audit 2026-03-09T17:41:05.436431+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:06 vm02 bash[23351]: audit 2026-03-09T17:41:05.436431+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]: dispatch 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: cluster 2026-03-09T17:41:06.419626+0000 mon.a (mon.0) 3254 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: cluster 2026-03-09T17:41:06.419626+0000 mon.a (mon.0) 3254 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: audit 2026-03-09T17:41:06.423163+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: audit 2026-03-09T17:41:06.423163+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: cluster 2026-03-09T17:41:06.439316+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: cluster 2026-03-09T17:41:06.439316+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: cluster 2026-03-09T17:41:06.848001+0000 mgr.y (mgr.14505) 580 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:07 vm00 bash[20770]: cluster 2026-03-09T17:41:06.848001+0000 mgr.y (mgr.14505) 580 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: cluster 2026-03-09T17:41:06.419626+0000 mon.a (mon.0) 3254 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: cluster 2026-03-09T17:41:06.419626+0000 mon.a (mon.0) 3254 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: audit 2026-03-09T17:41:06.423163+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: audit 2026-03-09T17:41:06.423163+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: cluster 2026-03-09T17:41:06.439316+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: cluster 2026-03-09T17:41:06.439316+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: cluster 2026-03-09T17:41:06.848001+0000 mgr.y (mgr.14505) 580 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:07 vm00 bash[28333]: cluster 2026-03-09T17:41:06.848001+0000 mgr.y (mgr.14505) 580 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: cluster 2026-03-09T17:41:06.419626+0000 mon.a (mon.0) 3254 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: cluster 2026-03-09T17:41:06.419626+0000 mon.a (mon.0) 3254 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: audit 2026-03-09T17:41:06.423163+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: audit 2026-03-09T17:41:06.423163+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-132"}]': finished 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: cluster 2026-03-09T17:41:06.439316+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: cluster 2026-03-09T17:41:06.439316+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: cluster 2026-03-09T17:41:06.848001+0000 mgr.y (mgr.14505) 580 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:07.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:07 vm02 bash[23351]: cluster 2026-03-09T17:41:06.848001+0000 mgr.y (mgr.14505) 580 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:08 vm00 bash[28333]: cluster 2026-03-09T17:41:07.448377+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:08 vm00 bash[28333]: cluster 2026-03-09T17:41:07.448377+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:08 vm00 bash[28333]: cluster 2026-03-09T17:41:07.462496+0000 mon.a (mon.0) 3258 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:08 vm00 bash[28333]: cluster 2026-03-09T17:41:07.462496+0000 mon.a (mon.0) 3258 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:08 vm00 bash[20770]: cluster 2026-03-09T17:41:07.448377+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:08 vm00 bash[20770]: cluster 2026-03-09T17:41:07.448377+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:08 vm00 bash[20770]: cluster 2026-03-09T17:41:07.462496+0000 mon.a (mon.0) 3258 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T17:41:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:08 vm00 bash[20770]: cluster 2026-03-09T17:41:07.462496+0000 mon.a (mon.0) 3258 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T17:41:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:08 vm02 bash[23351]: cluster 2026-03-09T17:41:07.448377+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:08 vm02 bash[23351]: cluster 2026-03-09T17:41:07.448377+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 
09 17:41:08 vm02 bash[23351]: cluster 2026-03-09T17:41:07.462496+0000 mon.a (mon.0) 3258 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T17:41:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:08 vm02 bash[23351]: cluster 2026-03-09T17:41:07.462496+0000 mon.a (mon.0) 3258 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: cluster 2026-03-09T17:41:08.494695+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: cluster 2026-03-09T17:41:08.494695+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: audit 2026-03-09T17:41:08.495269+0000 mon.c (mon.2) 726 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: audit 2026-03-09T17:41:08.495269+0000 mon.c (mon.2) 726 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: audit 2026-03-09T17:41:08.496987+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: audit 2026-03-09T17:41:08.496987+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: cluster 2026-03-09T17:41:08.848371+0000 mgr.y (mgr.14505) 581 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:09 vm00 bash[28333]: cluster 2026-03-09T17:41:08.848371+0000 mgr.y (mgr.14505) 581 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: cluster 2026-03-09T17:41:08.494695+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: cluster 2026-03-09T17:41:08.494695+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: audit 2026-03-09T17:41:08.495269+0000 mon.c (mon.2) 726 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: audit 2026-03-09T17:41:08.495269+0000 mon.c (mon.2) 726 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: audit 2026-03-09T17:41:08.496987+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: audit 2026-03-09T17:41:08.496987+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: cluster 2026-03-09T17:41:08.848371+0000 mgr.y (mgr.14505) 581 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:09 vm00 bash[20770]: cluster 2026-03-09T17:41:08.848371+0000 mgr.y (mgr.14505) 581 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: cluster 2026-03-09T17:41:08.494695+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: cluster 2026-03-09T17:41:08.494695+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: audit 2026-03-09T17:41:08.495269+0000 mon.c (mon.2) 726 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: audit 2026-03-09T17:41:08.495269+0000 mon.c (mon.2) 726 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: audit 2026-03-09T17:41:08.496987+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: audit 2026-03-09T17:41:08.496987+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: cluster 2026-03-09T17:41:08.848371+0000 mgr.y (mgr.14505) 581 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:09 vm02 bash[23351]: cluster 2026-03-09T17:41:08.848371+0000 mgr.y (mgr.14505) 581 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:09.474581+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:09.474581+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: cluster 2026-03-09T17:41:09.484050+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: cluster 2026-03-09T17:41:09.484050+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:09.487990+0000 mon.c (mon.2) 727 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:09.487990+0000 mon.c (mon.2) 727 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:09.503116+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:09.503116+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:10.478030+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:10.478030+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: cluster 2026-03-09T17:41:10.482836+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: cluster 2026-03-09T17:41:10.482836+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:10.483873+0000 mon.c (mon.2) 728 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:10.483873+0000 mon.c (mon.2) 728 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:10.488794+0000 mon.a (mon.0) 3266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:10 vm00 bash[20770]: audit 2026-03-09T17:41:10.488794+0000 mon.a (mon.0) 3266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:09.474581+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:09.474581+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: cluster 2026-03-09T17:41:09.484050+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: cluster 2026-03-09T17:41:09.484050+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:09.487990+0000 mon.c (mon.2) 727 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:09.487990+0000 mon.c (mon.2) 727 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:09.503116+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:09.503116+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:10.478030+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:10.478030+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: cluster 2026-03-09T17:41:10.482836+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: cluster 2026-03-09T17:41:10.482836+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:10.483873+0000 mon.c (mon.2) 728 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:10.483873+0000 mon.c (mon.2) 728 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:10.488794+0000 mon.a (mon.0) 3266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:10 vm00 bash[28333]: audit 2026-03-09T17:41:10.488794+0000 mon.a (mon.0) 3266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:09.474581+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:09.474581+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: cluster 2026-03-09T17:41:09.484050+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T17:41:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: cluster 2026-03-09T17:41:09.484050+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:09.487990+0000 mon.c (mon.2) 727 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:09.487990+0000 mon.c (mon.2) 727 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:09.503116+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:09.503116+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:10.478030+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:10.478030+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: cluster 2026-03-09T17:41:10.482836+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: cluster 2026-03-09T17:41:10.482836+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:10.483873+0000 mon.c (mon.2) 728 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:10.483873+0000 mon.c (mon.2) 728 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:10.488794+0000 mon.a (mon.0) 3266 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:10.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:10 vm02 bash[23351]: audit 2026-03-09T17:41:10.488794+0000 mon.a (mon.0) 3266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: cluster 2026-03-09T17:41:10.848706+0000 mgr.y (mgr.14505) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: cluster 2026-03-09T17:41:10.848706+0000 mgr.y (mgr.14505) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: audit 2026-03-09T17:41:11.481732+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: audit 2026-03-09T17:41:11.481732+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: audit 2026-03-09T17:41:11.485220+0000 mon.c (mon.2) 729 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: audit 2026-03-09T17:41:11.485220+0000 mon.c (mon.2) 729 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: cluster 2026-03-09T17:41:11.493569+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: cluster 2026-03-09T17:41:11.493569+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: audit 2026-03-09T17:41:11.494112+0000 mon.a (mon.0) 3269 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:11 vm00 bash[20770]: audit 2026-03-09T17:41:11.494112+0000 mon.a (mon.0) 3269 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: cluster 2026-03-09T17:41:10.848706+0000 mgr.y (mgr.14505) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: cluster 2026-03-09T17:41:10.848706+0000 mgr.y (mgr.14505) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: audit 2026-03-09T17:41:11.481732+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: audit 2026-03-09T17:41:11.481732+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: audit 2026-03-09T17:41:11.485220+0000 mon.c (mon.2) 729 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: audit 2026-03-09T17:41:11.485220+0000 mon.c (mon.2) 729 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: cluster 2026-03-09T17:41:11.493569+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: cluster 2026-03-09T17:41:11.493569+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T17:41:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: audit 2026-03-09T17:41:11.494112+0000 mon.a (mon.0) 3269 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:11 vm00 bash[28333]: audit 2026-03-09T17:41:11.494112+0000 mon.a (mon.0) 3269 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: cluster 2026-03-09T17:41:10.848706+0000 mgr.y (mgr.14505) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: cluster 2026-03-09T17:41:10.848706+0000 mgr.y (mgr.14505) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 955 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: audit 2026-03-09T17:41:11.481732+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: audit 2026-03-09T17:41:11.481732+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: audit 2026-03-09T17:41:11.485220+0000 mon.c (mon.2) 729 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: audit 2026-03-09T17:41:11.485220+0000 mon.c (mon.2) 729 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: cluster 2026-03-09T17:41:11.493569+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: cluster 2026-03-09T17:41:11.493569+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T17:41:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: audit 2026-03-09T17:41:11.494112+0000 mon.a (mon.0) 3269 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:11.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:11 vm02 bash[23351]: audit 2026-03-09T17:41:11.494112+0000 mon.a (mon.0) 3269 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]: dispatch 2026-03-09T17:41:12.519 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:41:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: audit 2026-03-09T17:41:12.167599+0000 mgr.y (mgr.14505) 583 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: audit 2026-03-09T17:41:12.167599+0000 mgr.y (mgr.14505) 583 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: cluster 2026-03-09T17:41:12.481844+0000 mon.a (mon.0) 3270 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: cluster 2026-03-09T17:41:12.481844+0000 mon.a (mon.0) 3270 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: audit 2026-03-09T17:41:12.484681+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]': finished 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: audit 2026-03-09T17:41:12.484681+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]': finished 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: cluster 2026-03-09T17:41:12.496040+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:12 vm00 bash[20770]: cluster 2026-03-09T17:41:12.496040+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: audit 2026-03-09T17:41:12.167599+0000 mgr.y (mgr.14505) 583 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: audit 2026-03-09T17:41:12.167599+0000 mgr.y (mgr.14505) 583 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: cluster 2026-03-09T17:41:12.481844+0000 mon.a (mon.0) 3270 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: cluster 2026-03-09T17:41:12.481844+0000 mon.a (mon.0) 3270 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: audit 2026-03-09T17:41:12.484681+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]': finished 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: audit 2026-03-09T17:41:12.484681+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]': finished 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: cluster 2026-03-09T17:41:12.496040+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T17:41:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:12 vm00 bash[28333]: cluster 2026-03-09T17:41:12.496040+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T17:41:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: audit 2026-03-09T17:41:12.167599+0000 mgr.y (mgr.14505) 583 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: audit 2026-03-09T17:41:12.167599+0000 mgr.y (mgr.14505) 583 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: cluster 2026-03-09T17:41:12.481844+0000 mon.a (mon.0) 3270 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: cluster 2026-03-09T17:41:12.481844+0000 mon.a (mon.0) 3270 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: audit 2026-03-09T17:41:12.484681+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]': finished 2026-03-09T17:41:12.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: audit 2026-03-09T17:41:12.484681+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-134", "mode": "writeback"}]': finished 2026-03-09T17:41:12.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: cluster 2026-03-09T17:41:12.496040+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T17:41:12.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:12 vm02 bash[23351]: cluster 2026-03-09T17:41:12.496040+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:12.552291+0000 mon.c (mon.2) 730 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:12.552291+0000 mon.c (mon.2) 730 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:12.552635+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:12.552635+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: cluster 2026-03-09T17:41:12.849410+0000 mgr.y (mgr.14505) 584 : cluster [DBG] pgmap v1024: 268 pgs: 6 unknown, 262 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: cluster 2026-03-09T17:41:12.849410+0000 mgr.y (mgr.14505) 584 : cluster [DBG] pgmap v1024: 268 pgs: 6 unknown, 262 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:13.303622+0000 mon.a (mon.0) 3274 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:13.303622+0000 mon.a (mon.0) 3274 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:13.306996+0000 mon.c (mon.2) 731 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:13 vm00 bash[20770]: audit 2026-03-09T17:41:13.306996+0000 mon.c (mon.2) 731 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:12.552291+0000 mon.c (mon.2) 730 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:12.552291+0000 mon.c (mon.2) 730 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:12.552635+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:12.552635+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: cluster 2026-03-09T17:41:12.849410+0000 mgr.y (mgr.14505) 584 : cluster [DBG] pgmap v1024: 268 pgs: 6 unknown, 262 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: cluster 2026-03-09T17:41:12.849410+0000 mgr.y (mgr.14505) 584 : cluster [DBG] pgmap v1024: 268 pgs: 6 unknown, 262 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:13.303622+0000 mon.a (mon.0) 3274 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:13.303622+0000 mon.a (mon.0) 3274 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:13.306996+0000 mon.c (mon.2) 731 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:13 vm00 bash[28333]: audit 2026-03-09T17:41:13.306996+0000 mon.c (mon.2) 731 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:12.552291+0000 mon.c (mon.2) 730 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:12.552291+0000 mon.c (mon.2) 730 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:12.552635+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:12.552635+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: cluster 2026-03-09T17:41:12.849410+0000 mgr.y (mgr.14505) 584 : cluster [DBG] pgmap v1024: 268 pgs: 6 unknown, 262 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: cluster 2026-03-09T17:41:12.849410+0000 mgr.y (mgr.14505) 584 : cluster [DBG] pgmap v1024: 268 pgs: 6 unknown, 262 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:13.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:13.303622+0000 mon.a (mon.0) 3274 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:13.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:13.303622+0000 mon.a (mon.0) 3274 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:13.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:13.306996+0000 mon.c (mon.2) 731 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:13.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:13 vm02 bash[23351]: audit 2026-03-09T17:41:13.306996+0000 mon.c (mon.2) 731 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: audit 2026-03-09T17:41:13.531877+0000 mon.a (mon.0) 3275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: audit 2026-03-09T17:41:13.531877+0000 mon.a (mon.0) 3275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: cluster 2026-03-09T17:41:13.537666+0000 mon.a (mon.0) 3276 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: cluster 2026-03-09T17:41:13.537666+0000 mon.a (mon.0) 3276 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: audit 2026-03-09T17:41:13.541376+0000 mon.c (mon.2) 732 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: audit 2026-03-09T17:41:13.541376+0000 mon.c (mon.2) 732 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: audit 2026-03-09T17:41:13.541680+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:14 vm02 bash[23351]: audit 2026-03-09T17:41:13.541680+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: audit 2026-03-09T17:41:13.531877+0000 mon.a (mon.0) 3275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: audit 2026-03-09T17:41:13.531877+0000 mon.a (mon.0) 3275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: cluster 2026-03-09T17:41:13.537666+0000 mon.a (mon.0) 3276 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: cluster 2026-03-09T17:41:13.537666+0000 mon.a (mon.0) 3276 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: audit 2026-03-09T17:41:13.541376+0000 mon.c (mon.2) 732 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: audit 2026-03-09T17:41:13.541376+0000 mon.c (mon.2) 732 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: audit 2026-03-09T17:41:13.541680+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:14 vm00 bash[20770]: audit 2026-03-09T17:41:13.541680+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: audit 2026-03-09T17:41:13.531877+0000 mon.a (mon.0) 3275 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: audit 2026-03-09T17:41:13.531877+0000 mon.a (mon.0) 3275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: cluster 2026-03-09T17:41:13.537666+0000 mon.a (mon.0) 3276 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: cluster 2026-03-09T17:41:13.537666+0000 mon.a (mon.0) 3276 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: audit 2026-03-09T17:41:13.541376+0000 mon.c (mon.2) 732 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: audit 2026-03-09T17:41:13.541376+0000 mon.c (mon.2) 732 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: audit 2026-03-09T17:41:13.541680+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:14 vm00 bash[28333]: audit 2026-03-09T17:41:13.541680+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]: dispatch 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: cluster 2026-03-09T17:41:14.532044+0000 mon.a (mon.0) 3278 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: cluster 2026-03-09T17:41:14.532044+0000 mon.a (mon.0) 3278 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: audit 2026-03-09T17:41:14.535150+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: audit 2026-03-09T17:41:14.535150+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: cluster 2026-03-09T17:41:14.543603+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: cluster 2026-03-09T17:41:14.543603+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: cluster 2026-03-09T17:41:14.849757+0000 mgr.y (mgr.14505) 585 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:15 vm02 bash[23351]: cluster 2026-03-09T17:41:14.849757+0000 mgr.y (mgr.14505) 585 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: cluster 2026-03-09T17:41:14.532044+0000 mon.a (mon.0) 3278 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: cluster 2026-03-09T17:41:14.532044+0000 mon.a (mon.0) 3278 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: audit 2026-03-09T17:41:14.535150+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: audit 2026-03-09T17:41:14.535150+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: cluster 2026-03-09T17:41:14.543603+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: cluster 2026-03-09T17:41:14.543603+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: cluster 2026-03-09T17:41:14.849757+0000 mgr.y (mgr.14505) 585 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:15 vm00 bash[20770]: cluster 2026-03-09T17:41:14.849757+0000 mgr.y (mgr.14505) 585 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: cluster 2026-03-09T17:41:14.532044+0000 mon.a (mon.0) 3278 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: cluster 2026-03-09T17:41:14.532044+0000 mon.a (mon.0) 3278 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: audit 2026-03-09T17:41:14.535150+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: audit 2026-03-09T17:41:14.535150+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-134"}]': finished 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: cluster 2026-03-09T17:41:14.543603+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: cluster 2026-03-09T17:41:14.543603+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: cluster 2026-03-09T17:41:14.849757+0000 mgr.y (mgr.14505) 585 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:15 vm00 bash[28333]: cluster 2026-03-09T17:41:14.849757+0000 mgr.y (mgr.14505) 585 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T17:41:16.553 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:41:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:41:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:41:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:16 vm02 bash[23351]: cluster 2026-03-09T17:41:15.548320+0000 mon.a (mon.0) 3281 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:16 vm02 bash[23351]: cluster 2026-03-09T17:41:15.548320+0000 mon.a (mon.0) 3281 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:16 vm02 bash[23351]: cluster 2026-03-09T17:41:15.572833+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T17:41:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:16 vm02 bash[23351]: cluster 2026-03-09T17:41:15.572833+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:16 vm00 bash[28333]: cluster 2026-03-09T17:41:15.548320+0000 mon.a (mon.0) 3281 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:16 vm00 bash[28333]: cluster 2026-03-09T17:41:15.548320+0000 mon.a (mon.0) 3281 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:16 vm00 bash[28333]: cluster 2026-03-09T17:41:15.572833+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:16 vm00 bash[28333]: cluster 2026-03-09T17:41:15.572833+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:16 vm00 bash[20770]: cluster 2026-03-09T17:41:15.548320+0000 mon.a (mon.0) 3281 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:17.038 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:16 vm00 bash[20770]: cluster 2026-03-09T17:41:15.548320+0000 mon.a (mon.0) 3281 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:16 vm00 bash[20770]: cluster 2026-03-09T17:41:15.572833+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T17:41:17.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:16 vm00 bash[20770]: cluster 2026-03-09T17:41:15.572833+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.586533+0000 mon.c (mon.2) 733 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.586533+0000 mon.c (mon.2) 733 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: cluster 2026-03-09T17:41:16.586677+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: cluster 2026-03-09T17:41:16.586677+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.589392+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.589392+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.803346+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:17.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.803346+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: cluster 2026-03-09T17:41:16.811294+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: cluster 2026-03-09T17:41:16.811294+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.814337+0000 mon.c (mon.2) 734 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.814337+0000 mon.c (mon.2) 734 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.815344+0000 mon.a (mon.0) 3287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: audit 2026-03-09T17:41:16.815344+0000 mon.a (mon.0) 3287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: cluster 2026-03-09T17:41:16.850091+0000 mgr.y (mgr.14505) 586 : cluster [DBG] pgmap v1031: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:17.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:17 vm02 bash[23351]: cluster 2026-03-09T17:41:16.850091+0000 mgr.y (mgr.14505) 586 : cluster [DBG] pgmap v1031: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.586533+0000 mon.c (mon.2) 733 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.586533+0000 mon.c (mon.2) 733 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: cluster 2026-03-09T17:41:16.586677+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: cluster 2026-03-09T17:41:16.586677+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.589392+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.589392+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.803346+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.803346+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: cluster 2026-03-09T17:41:16.811294+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: cluster 2026-03-09T17:41:16.811294+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.814337+0000 mon.c (mon.2) 734 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.814337+0000 mon.c (mon.2) 734 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.815344+0000 mon.a (mon.0) 3287 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: audit 2026-03-09T17:41:16.815344+0000 mon.a (mon.0) 3287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: cluster 2026-03-09T17:41:16.850091+0000 mgr.y (mgr.14505) 586 : cluster [DBG] pgmap v1031: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:18.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:17 vm00 bash[20770]: cluster 2026-03-09T17:41:16.850091+0000 mgr.y (mgr.14505) 586 : cluster [DBG] pgmap v1031: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.586533+0000 mon.c (mon.2) 733 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.586533+0000 mon.c (mon.2) 733 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: cluster 2026-03-09T17:41:16.586677+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: cluster 2026-03-09T17:41:16.586677+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.589392+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.589392+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.803346+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.803346+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: cluster 2026-03-09T17:41:16.811294+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: cluster 2026-03-09T17:41:16.811294+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.814337+0000 mon.c (mon.2) 734 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.814337+0000 mon.c (mon.2) 734 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.815344+0000 mon.a (mon.0) 3287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: audit 2026-03-09T17:41:16.815344+0000 mon.a (mon.0) 3287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: cluster 2026-03-09T17:41:16.850091+0000 mgr.y (mgr.14505) 586 : cluster [DBG] pgmap v1031: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:18.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:17 vm00 bash[28333]: cluster 2026-03-09T17:41:16.850091+0000 mgr.y (mgr.14505) 586 : cluster [DBG] pgmap v1031: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: audit 2026-03-09T17:41:17.813442+0000 mon.a (mon.0) 3288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: audit 2026-03-09T17:41:17.813442+0000 mon.a (mon.0) 3288 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: cluster 2026-03-09T17:41:17.818672+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: cluster 2026-03-09T17:41:17.818672+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: audit 2026-03-09T17:41:17.824514+0000 mon.c (mon.2) 735 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: audit 2026-03-09T17:41:17.824514+0000 mon.c (mon.2) 735 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: audit 2026-03-09T17:41:17.826100+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:18 vm02 bash[23351]: audit 2026-03-09T17:41:17.826100+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: audit 2026-03-09T17:41:17.813442+0000 mon.a (mon.0) 3288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: audit 2026-03-09T17:41:17.813442+0000 mon.a (mon.0) 3288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: cluster 2026-03-09T17:41:17.818672+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: cluster 2026-03-09T17:41:17.818672+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: audit 2026-03-09T17:41:17.824514+0000 mon.c (mon.2) 735 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: audit 2026-03-09T17:41:17.824514+0000 mon.c (mon.2) 735 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: audit 2026-03-09T17:41:17.826100+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:18 vm00 bash[20770]: audit 2026-03-09T17:41:17.826100+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: audit 2026-03-09T17:41:17.813442+0000 mon.a (mon.0) 3288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: audit 2026-03-09T17:41:17.813442+0000 mon.a (mon.0) 3288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: cluster 2026-03-09T17:41:17.818672+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: cluster 2026-03-09T17:41:17.818672+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: audit 2026-03-09T17:41:17.824514+0000 mon.c (mon.2) 735 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: audit 2026-03-09T17:41:17.824514+0000 mon.c (mon.2) 735 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: audit 2026-03-09T17:41:17.826100+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:19.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:18 vm00 bash[28333]: audit 2026-03-09T17:41:17.826100+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:18.816636+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:18.816636+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:18.822115+0000 mon.c (mon.2) 736 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:18.822115+0000 mon.c (mon.2) 736 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:18.824386+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:18.824386+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:18.825465+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:18.825465+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:18.851366+0000 mgr.y (mgr.14505) 587 : cluster [DBG] pgmap v1034: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:18.851366+0000 mgr.y (mgr.14505) 587 : cluster [DBG] pgmap v1034: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:19.816659+0000 mon.a (mon.0) 3294 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:19.816659+0000 mon.a (mon.0) 3294 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:19.819721+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]': finished 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: audit 2026-03-09T17:41:19.819721+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]': finished 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:19.828474+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T17:41:20.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:19 vm02 bash[23351]: cluster 2026-03-09T17:41:19.828474+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:18.816636+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:18.816636+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:18.822115+0000 mon.c (mon.2) 736 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:18.822115+0000 mon.c (mon.2) 736 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:18.824386+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:18.824386+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:18.825465+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:18.825465+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:18.851366+0000 mgr.y (mgr.14505) 587 : cluster [DBG] pgmap v1034: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:18.851366+0000 mgr.y (mgr.14505) 587 : cluster [DBG] pgmap v1034: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:19.816659+0000 mon.a (mon.0) 3294 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:19.816659+0000 mon.a (mon.0) 3294 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:19.819721+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]': finished 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: audit 2026-03-09T17:41:19.819721+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]': finished 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:19.828474+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:19 vm00 bash[28333]: cluster 2026-03-09T17:41:19.828474+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:18.816636+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:18.816636+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:18.822115+0000 mon.c (mon.2) 736 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:18.822115+0000 mon.c (mon.2) 736 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:18.824386+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:18.824386+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:18.825465+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:18.825465+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]: dispatch 2026-03-09T17:41:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:18.851366+0000 mgr.y (mgr.14505) 587 : cluster [DBG] pgmap v1034: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:18.851366+0000 mgr.y (mgr.14505) 587 : cluster [DBG] pgmap v1034: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:19.816659+0000 mon.a (mon.0) 3294 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:19.816659+0000 mon.a (mon.0) 3294 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:19.819721+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]': finished 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: audit 2026-03-09T17:41:19.819721+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-136", "mode": "writeback"}]': finished 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:19.828474+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T17:41:20.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:19 vm00 bash[20770]: cluster 2026-03-09T17:41:19.828474+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:20 vm00 bash[28333]: audit 2026-03-09T17:41:19.911324+0000 mon.c (mon.2) 737 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:20 vm00 bash[28333]: audit 2026-03-09T17:41:19.911324+0000 mon.c (mon.2) 737 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:20 vm00 bash[28333]: audit 2026-03-09T17:41:19.911755+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:20 vm00 bash[28333]: audit 2026-03-09T17:41:19.911755+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:20 vm00 bash[20770]: audit 2026-03-09T17:41:19.911324+0000 mon.c (mon.2) 737 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:20 vm00 bash[20770]: audit 2026-03-09T17:41:19.911324+0000 mon.c (mon.2) 737 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:20 vm00 bash[20770]: audit 2026-03-09T17:41:19.911755+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:20 vm00 bash[20770]: audit 2026-03-09T17:41:19.911755+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:20 vm02 bash[23351]: audit 2026-03-09T17:41:19.911324+0000 mon.c (mon.2) 737 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:20 vm02 bash[23351]: audit 2026-03-09T17:41:19.911324+0000 mon.c (mon.2) 737 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:20 vm02 bash[23351]: audit 2026-03-09T17:41:19.911755+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:21.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:20 vm02 bash[23351]: audit 2026-03-09T17:41:19.911755+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: cluster 2026-03-09T17:41:20.851714+0000 mgr.y (mgr.14505) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: cluster 2026-03-09T17:41:20.851714+0000 mgr.y (mgr.14505) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: audit 2026-03-09T17:41:20.897517+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: audit 2026-03-09T17:41:20.897517+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: cluster 2026-03-09T17:41:20.900578+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: cluster 2026-03-09T17:41:20.900578+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: audit 2026-03-09T17:41:20.902208+0000 mon.c (mon.2) 738 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: audit 2026-03-09T17:41:20.902208+0000 mon.c (mon.2) 738 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: audit 2026-03-09T17:41:20.905095+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: audit 2026-03-09T17:41:20.905095+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: cluster 2026-03-09T17:41:21.800939+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:22 vm00 bash[20770]: cluster 2026-03-09T17:41:21.800939+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: cluster 2026-03-09T17:41:20.851714+0000 mgr.y (mgr.14505) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: cluster 2026-03-09T17:41:20.851714+0000 mgr.y (mgr.14505) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: audit 2026-03-09T17:41:20.897517+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: audit 2026-03-09T17:41:20.897517+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: cluster 2026-03-09T17:41:20.900578+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: cluster 2026-03-09T17:41:20.900578+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: audit 2026-03-09T17:41:20.902208+0000 mon.c (mon.2) 738 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: audit 2026-03-09T17:41:20.902208+0000 mon.c (mon.2) 738 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: audit 2026-03-09T17:41:20.905095+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: audit 2026-03-09T17:41:20.905095+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: cluster 2026-03-09T17:41:21.800939+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:22 vm00 bash[28333]: cluster 2026-03-09T17:41:21.800939+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:41:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:41:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: cluster 2026-03-09T17:41:20.851714+0000 mgr.y (mgr.14505) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: cluster 2026-03-09T17:41:20.851714+0000 mgr.y (mgr.14505) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: audit 2026-03-09T17:41:20.897517+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: audit 2026-03-09T17:41:20.897517+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: cluster 2026-03-09T17:41:20.900578+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: cluster 2026-03-09T17:41:20.900578+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: audit 2026-03-09T17:41:20.902208+0000 mon.c (mon.2) 738 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: audit 2026-03-09T17:41:20.902208+0000 mon.c (mon.2) 738 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: audit 2026-03-09T17:41:20.905095+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: audit 2026-03-09T17:41:20.905095+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]: dispatch 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: cluster 2026-03-09T17:41:21.800939+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:22.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:22 vm02 bash[23351]: cluster 2026-03-09T17:41:21.800939+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: cluster 2026-03-09T17:41:21.897668+0000 mon.a (mon.0) 3302 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: cluster 2026-03-09T17:41:21.897668+0000 mon.a (mon.0) 3302 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: audit 2026-03-09T17:41:22.006700+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: audit 2026-03-09T17:41:22.006700+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: cluster 2026-03-09T17:41:22.009552+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: cluster 2026-03-09T17:41:22.009552+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: audit 2026-03-09T17:41:22.178089+0000 mgr.y (mgr.14505) 589 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:23 vm00 bash[20770]: audit 2026-03-09T17:41:22.178089+0000 mgr.y (mgr.14505) 589 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: cluster 2026-03-09T17:41:21.897668+0000 mon.a (mon.0) 3302 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: cluster 2026-03-09T17:41:21.897668+0000 mon.a (mon.0) 3302 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: audit 2026-03-09T17:41:22.006700+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: audit 2026-03-09T17:41:22.006700+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: cluster 2026-03-09T17:41:22.009552+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: cluster 2026-03-09T17:41:22.009552+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: audit 2026-03-09T17:41:22.178089+0000 mgr.y (mgr.14505) 589 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:23 vm00 bash[28333]: audit 2026-03-09T17:41:22.178089+0000 mgr.y (mgr.14505) 589 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: cluster 2026-03-09T17:41:21.897668+0000 mon.a (mon.0) 3302 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: cluster 2026-03-09T17:41:21.897668+0000 mon.a (mon.0) 3302 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: audit 2026-03-09T17:41:22.006700+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: audit 2026-03-09T17:41:22.006700+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-136"}]': finished 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: cluster 2026-03-09T17:41:22.009552+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: cluster 2026-03-09T17:41:22.009552+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: audit 2026-03-09T17:41:22.178089+0000 mgr.y (mgr.14505) 589 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:23 vm02 bash[23351]: audit 2026-03-09T17:41:22.178089+0000 mgr.y (mgr.14505) 589 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:24 vm00 bash[20770]: cluster 2026-03-09T17:41:22.852579+0000 mgr.y (mgr.14505) 590 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:24 vm00 bash[20770]: cluster 2026-03-09T17:41:22.852579+0000 mgr.y (mgr.14505) 590 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:24 vm00 bash[20770]: cluster 2026-03-09T17:41:23.035439+0000 mon.a (mon.0) 3305 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:24 vm00 bash[20770]: cluster 2026-03-09T17:41:23.035439+0000 mon.a (mon.0) 3305 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:24 vm00 bash[28333]: cluster 2026-03-09T17:41:22.852579+0000 mgr.y (mgr.14505) 590 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:24 vm00 bash[28333]: cluster 2026-03-09T17:41:22.852579+0000 mgr.y (mgr.14505) 590 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:24 vm00 bash[28333]: cluster 2026-03-09T17:41:23.035439+0000 mon.a (mon.0) 3305 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T17:41:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:24 vm00 bash[28333]: cluster 2026-03-09T17:41:23.035439+0000 mon.a (mon.0) 3305 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T17:41:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:24 vm02 bash[23351]: cluster 2026-03-09T17:41:22.852579+0000 mgr.y (mgr.14505) 590 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T17:41:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:24 
vm02 bash[23351]: cluster 2026-03-09T17:41:22.852579+0000 mgr.y (mgr.14505) 590 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T17:41:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:24 vm02 bash[23351]: cluster 2026-03-09T17:41:23.035439+0000 mon.a (mon.0) 3305 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T17:41:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:24 vm02 bash[23351]: cluster 2026-03-09T17:41:23.035439+0000 mon.a (mon.0) 3305 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T17:41:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:25 vm02 bash[23351]: cluster 2026-03-09T17:41:24.046652+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T17:41:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:25 vm02 bash[23351]: cluster 2026-03-09T17:41:24.046652+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T17:41:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:25 vm02 bash[23351]: audit 2026-03-09T17:41:24.062900+0000 mon.c (mon.2) 739 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:25 vm02 bash[23351]: audit 2026-03-09T17:41:24.062900+0000 mon.c (mon.2) 739 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:25 vm02 bash[23351]: audit 2026-03-09T17:41:24.063092+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:25 vm02 bash[23351]: audit 2026-03-09T17:41:24.063092+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:25 vm00 bash[20770]: cluster 2026-03-09T17:41:24.046652+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:25 vm00 bash[20770]: cluster 2026-03-09T17:41:24.046652+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:25 vm00 bash[20770]: audit 2026-03-09T17:41:24.062900+0000 mon.c (mon.2) 739 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:25 vm00 bash[20770]: audit 2026-03-09T17:41:24.062900+0000 mon.c (mon.2) 739 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:25 vm00 bash[20770]: audit 2026-03-09T17:41:24.063092+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:25 vm00 bash[20770]: audit 2026-03-09T17:41:24.063092+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:25 vm00 bash[28333]: cluster 2026-03-09T17:41:24.046652+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:25 vm00 bash[28333]: cluster 2026-03-09T17:41:24.046652+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:25 vm00 bash[28333]: audit 2026-03-09T17:41:24.062900+0000 mon.c (mon.2) 739 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:25 vm00 bash[28333]: audit 2026-03-09T17:41:24.062900+0000 mon.c (mon.2) 739 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:25 vm00 bash[28333]: audit 2026-03-09T17:41:24.063092+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:25 vm00 bash[28333]: audit 2026-03-09T17:41:24.063092+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: cluster 2026-03-09T17:41:24.852909+0000 mgr.y (mgr.14505) 591 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: cluster 2026-03-09T17:41:24.852909+0000 mgr.y (mgr.14505) 591 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: audit 2026-03-09T17:41:25.058690+0000 mon.a (mon.0) 3308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: audit 2026-03-09T17:41:25.058690+0000 mon.a (mon.0) 3308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: cluster 2026-03-09T17:41:25.064952+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: cluster 2026-03-09T17:41:25.064952+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: audit 2026-03-09T17:41:25.081927+0000 mon.c (mon.2) 740 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: audit 2026-03-09T17:41:25.081927+0000 mon.c (mon.2) 740 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: audit 2026-03-09T17:41:25.090147+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:26 vm00 bash[20770]: audit 2026-03-09T17:41:25.090147+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: cluster 2026-03-09T17:41:24.852909+0000 mgr.y (mgr.14505) 591 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: cluster 2026-03-09T17:41:24.852909+0000 mgr.y (mgr.14505) 591 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s 2026-03-09T17:41:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: audit 2026-03-09T17:41:25.058690+0000 mon.a (mon.0) 3308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: audit 2026-03-09T17:41:25.058690+0000 mon.a (mon.0) 3308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: cluster 2026-03-09T17:41:25.064952+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: cluster 2026-03-09T17:41:25.064952+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: audit 2026-03-09T17:41:25.081927+0000 mon.c (mon.2) 740 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: audit 2026-03-09T17:41:25.081927+0000 mon.c (mon.2) 740 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: audit 2026-03-09T17:41:25.090147+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:26 vm00 bash[28333]: audit 2026-03-09T17:41:25.090147+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.539 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:41:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:41:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: cluster 2026-03-09T17:41:24.852909+0000 mgr.y (mgr.14505) 591 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: cluster 2026-03-09T17:41:24.852909+0000 mgr.y (mgr.14505) 591 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: audit 2026-03-09T17:41:25.058690+0000 mon.a (mon.0) 3308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: audit 2026-03-09T17:41:25.058690+0000 mon.a (mon.0) 3308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: cluster 2026-03-09T17:41:25.064952+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: cluster 2026-03-09T17:41:25.064952+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: audit 2026-03-09T17:41:25.081927+0000 mon.c (mon.2) 740 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: audit 2026-03-09T17:41:25.081927+0000 mon.c (mon.2) 740 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: audit 2026-03-09T17:41:25.090147+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:26 vm02 bash[23351]: audit 2026-03-09T17:41:25.090147+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: audit 2026-03-09T17:41:26.255350+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: audit 2026-03-09T17:41:26.255350+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: audit 2026-03-09T17:41:26.337454+0000 mon.c (mon.2) 741 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: audit 2026-03-09T17:41:26.337454+0000 mon.c (mon.2) 741 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: cluster 2026-03-09T17:41:26.338572+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: cluster 2026-03-09T17:41:26.338572+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: audit 2026-03-09T17:41:26.339032+0000 mon.a (mon.0) 3313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: audit 2026-03-09T17:41:26.339032+0000 mon.a (mon.0) 3313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: cluster 2026-03-09T17:41:26.801559+0000 mon.a (mon.0) 3314 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:27 vm00 bash[20770]: cluster 2026-03-09T17:41:26.801559+0000 mon.a (mon.0) 3314 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: audit 2026-03-09T17:41:26.255350+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: audit 2026-03-09T17:41:26.255350+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: audit 2026-03-09T17:41:26.337454+0000 mon.c (mon.2) 741 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: audit 2026-03-09T17:41:26.337454+0000 mon.c (mon.2) 741 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: cluster 2026-03-09T17:41:26.338572+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: cluster 2026-03-09T17:41:26.338572+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: audit 2026-03-09T17:41:26.339032+0000 mon.a (mon.0) 3313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: audit 2026-03-09T17:41:26.339032+0000 mon.a (mon.0) 3313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: cluster 2026-03-09T17:41:26.801559+0000 mon.a (mon.0) 3314 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:27.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:27 vm00 bash[28333]: cluster 2026-03-09T17:41:26.801559+0000 mon.a (mon.0) 3314 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: audit 2026-03-09T17:41:26.255350+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: audit 2026-03-09T17:41:26.255350+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: audit 2026-03-09T17:41:26.337454+0000 mon.c (mon.2) 741 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: audit 2026-03-09T17:41:26.337454+0000 mon.c (mon.2) 741 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: cluster 2026-03-09T17:41:26.338572+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: cluster 2026-03-09T17:41:26.338572+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T17:41:27.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: audit 2026-03-09T17:41:26.339032+0000 mon.a (mon.0) 3313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: audit 2026-03-09T17:41:26.339032+0000 mon.a (mon.0) 3313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:27.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: cluster 2026-03-09T17:41:26.801559+0000 mon.a (mon.0) 3314 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:27.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:27 vm02 bash[23351]: cluster 2026-03-09T17:41:26.801559+0000 mon.a (mon.0) 3314 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: cluster 2026-03-09T17:41:26.853242+0000 mgr.y (mgr.14505) 592 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: cluster 2026-03-09T17:41:26.853242+0000 mgr.y (mgr.14505) 592 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:27.258945+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:27.258945+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: cluster 2026-03-09T17:41:27.268599+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: cluster 2026-03-09T17:41:27.268599+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:27.269472+0000 mon.c (mon.2) 742 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:27.269472+0000 mon.c (mon.2) 742 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:27.271001+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:27.271001+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:28.263809+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: audit 2026-03-09T17:41:28.263809+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: cluster 2026-03-09T17:41:28.268029+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:28 vm00 bash[20770]: cluster 2026-03-09T17:41:28.268029+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T17:41:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: cluster 2026-03-09T17:41:26.853242+0000 mgr.y (mgr.14505) 592 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: cluster 2026-03-09T17:41:26.853242+0000 mgr.y (mgr.14505) 592 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:27.258945+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:27.258945+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: cluster 2026-03-09T17:41:27.268599+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: cluster 2026-03-09T17:41:27.268599+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:27.269472+0000 mon.c (mon.2) 742 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:27.269472+0000 mon.c (mon.2) 742 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:27.271001+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:27.271001+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:28.263809+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: audit 2026-03-09T17:41:28.263809+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: cluster 2026-03-09T17:41:28.268029+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T17:41:28.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:28 vm00 bash[28333]: cluster 2026-03-09T17:41:28.268029+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: cluster 2026-03-09T17:41:26.853242+0000 mgr.y (mgr.14505) 592 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: cluster 2026-03-09T17:41:26.853242+0000 mgr.y (mgr.14505) 592 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:27.258945+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:27.258945+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: cluster 2026-03-09T17:41:27.268599+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: cluster 2026-03-09T17:41:27.268599+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:27.269472+0000 mon.c (mon.2) 742 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:27.269472+0000 mon.c (mon.2) 742 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:27.271001+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:27.271001+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:28.263809+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:41:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: audit 2026-03-09T17:41:28.263809+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:41:28.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: cluster 2026-03-09T17:41:28.268029+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T17:41:28.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:28 vm02 bash[23351]: cluster 2026-03-09T17:41:28.268029+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.276337+0000 mon.c (mon.2) 743 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.276337+0000 mon.c (mon.2) 743 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.293617+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.293617+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.317259+0000 mon.a (mon.0) 3321 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.317259+0000 mon.a (mon.0) 3321 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.318470+0000 mon.c (mon.2) 744 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:29 vm02 bash[23351]: audit 2026-03-09T17:41:28.318470+0000 mon.c (mon.2) 744 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.276337+0000 mon.c (mon.2) 743 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.276337+0000 mon.c (mon.2) 743 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.293617+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.293617+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.317259+0000 mon.a (mon.0) 3321 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.317259+0000 mon.a (mon.0) 3321 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.318470+0000 mon.c (mon.2) 744 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:29 vm00 bash[28333]: audit 2026-03-09T17:41:28.318470+0000 mon.c (mon.2) 744 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.276337+0000 mon.c (mon.2) 743 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.276337+0000 mon.c (mon.2) 743 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.293617+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.293617+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.317259+0000 mon.a (mon.0) 3321 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.317259+0000 mon.a (mon.0) 3321 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.318470+0000 mon.c (mon.2) 744 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:29 vm00 bash[20770]: audit 2026-03-09T17:41:28.318470+0000 mon.c (mon.2) 744 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:30 vm02 bash[23351]: cluster 2026-03-09T17:41:28.853534+0000 mgr.y (mgr.14505) 593 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:30 vm02 bash[23351]: cluster 2026-03-09T17:41:28.853534+0000 mgr.y (mgr.14505) 593 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:30 vm02 bash[23351]: audit 2026-03-09T17:41:29.280771+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:41:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:30 vm02 bash[23351]: audit 2026-03-09T17:41:29.280771+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:41:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:30 vm02 bash[23351]: cluster 2026-03-09T17:41:29.285979+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T17:41:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:30 vm02 bash[23351]: cluster 2026-03-09T17:41:29.285979+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:30 vm00 bash[20770]: cluster 2026-03-09T17:41:28.853534+0000 mgr.y (mgr.14505) 593 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:30 vm00 bash[20770]: cluster 2026-03-09T17:41:28.853534+0000 mgr.y (mgr.14505) 593 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:30 vm00 bash[20770]: audit 2026-03-09T17:41:29.280771+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:30 vm00 bash[20770]: audit 2026-03-09T17:41:29.280771+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:30 vm00 bash[20770]: cluster 2026-03-09T17:41:29.285979+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:30 vm00 bash[20770]: cluster 2026-03-09T17:41:29.285979+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:30 vm00 bash[28333]: cluster 2026-03-09T17:41:28.853534+0000 mgr.y (mgr.14505) 593 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:30 vm00 bash[28333]: cluster 2026-03-09T17:41:28.853534+0000 mgr.y (mgr.14505) 593 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:30 vm00 bash[28333]: audit 2026-03-09T17:41:29.280771+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:30 vm00 bash[28333]: audit 2026-03-09T17:41:29.280771+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:30 vm00 bash[28333]: cluster 2026-03-09T17:41:29.285979+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T17:41:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:30 vm00 bash[28333]: cluster 2026-03-09T17:41:29.285979+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.328554+0000 mon.c (mon.2) 745 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.328554+0000 mon.c (mon.2) 745 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.328934+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.328934+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.329437+0000 mon.c (mon.2) 746 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.329437+0000 mon.c (mon.2) 746 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.329717+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:31 vm02 bash[23351]: audit 2026-03-09T17:41:30.329717+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.328554+0000 mon.c (mon.2) 745 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.328554+0000 mon.c (mon.2) 745 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.328934+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.328934+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.329437+0000 mon.c (mon.2) 746 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.329437+0000 mon.c (mon.2) 746 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.329717+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:31 vm00 bash[20770]: audit 2026-03-09T17:41:30.329717+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.328554+0000 mon.c (mon.2) 745 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.328554+0000 mon.c (mon.2) 745 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.328934+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.328934+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.329437+0000 mon.c (mon.2) 746 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.329437+0000 mon.c (mon.2) 746 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.329717+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:31.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:31 vm00 bash[28333]: audit 2026-03-09T17:41:30.329717+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]: dispatch 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: cluster 2026-03-09T17:41:30.853915+0000 mgr.y (mgr.14505) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: cluster 2026-03-09T17:41:30.853915+0000 mgr.y (mgr.14505) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: audit 2026-03-09T17:41:31.298932+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]': finished 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: audit 2026-03-09T17:41:31.298932+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]': finished 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: cluster 2026-03-09T17:41:31.309242+0000 mon.a (mon.0) 3327 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: cluster 2026-03-09T17:41:31.309242+0000 mon.a (mon.0) 3327 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: cluster 2026-03-09T17:41:31.803795+0000 mon.a (mon.0) 3328 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:32 vm02 bash[23351]: cluster 2026-03-09T17:41:31.803795+0000 mon.a (mon.0) 3328 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:32.637 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:41:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: cluster 2026-03-09T17:41:30.853915+0000 mgr.y (mgr.14505) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: cluster 2026-03-09T17:41:30.853915+0000 mgr.y (mgr.14505) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: audit 2026-03-09T17:41:31.298932+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]': finished 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: audit 2026-03-09T17:41:31.298932+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]': finished 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: cluster 2026-03-09T17:41:31.309242+0000 mon.a (mon.0) 3327 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: cluster 2026-03-09T17:41:31.309242+0000 mon.a (mon.0) 3327 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: cluster 2026-03-09T17:41:31.803795+0000 mon.a (mon.0) 3328 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:32 vm00 bash[20770]: cluster 2026-03-09T17:41:31.803795+0000 mon.a (mon.0) 3328 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: cluster 2026-03-09T17:41:30.853915+0000 mgr.y (mgr.14505) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: cluster 2026-03-09T17:41:30.853915+0000 mgr.y (mgr.14505) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:32.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: audit 2026-03-09T17:41:31.298932+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]': finished 2026-03-09T17:41:32.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: audit 2026-03-09T17:41:31.298932+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-138"}]': finished 2026-03-09T17:41:32.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: cluster 2026-03-09T17:41:31.309242+0000 mon.a (mon.0) 3327 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T17:41:32.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: cluster 2026-03-09T17:41:31.309242+0000 mon.a (mon.0) 3327 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T17:41:32.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: cluster 2026-03-09T17:41:31.803795+0000 mon.a (mon.0) 3328 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:32.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:32 vm00 bash[28333]: cluster 2026-03-09T17:41:31.803795+0000 mon.a (mon.0) 3328 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:33 vm02 bash[23351]: audit 2026-03-09T17:41:32.188630+0000 mgr.y (mgr.14505) 595 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:33 vm02 bash[23351]: audit 2026-03-09T17:41:32.188630+0000 mgr.y (mgr.14505) 595 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:33 vm02 bash[23351]: cluster 2026-03-09T17:41:32.322072+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T17:41:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:33 vm02 bash[23351]: cluster 2026-03-09T17:41:32.322072+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:33 vm00 bash[28333]: audit 2026-03-09T17:41:32.188630+0000 mgr.y (mgr.14505) 595 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:33 vm00 bash[28333]: audit 2026-03-09T17:41:32.188630+0000 mgr.y (mgr.14505) 595 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:33 vm00 bash[28333]: cluster 2026-03-09T17:41:32.322072+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:33 vm00 bash[28333]: cluster 2026-03-09T17:41:32.322072+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:33 vm00 bash[20770]: audit 2026-03-09T17:41:32.188630+0000 mgr.y (mgr.14505) 595 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:33 vm00 bash[20770]: audit 2026-03-09T17:41:32.188630+0000 mgr.y (mgr.14505) 595 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:33 vm00 bash[20770]: cluster 2026-03-09T17:41:32.322072+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T17:41:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:33 vm00 bash[20770]: cluster 2026-03-09T17:41:32.322072+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: cluster 2026-03-09T17:41:32.854667+0000 mgr.y (mgr.14505) 596 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: cluster 2026-03-09T17:41:32.854667+0000 mgr.y (mgr.14505) 596 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: cluster 2026-03-09T17:41:33.369688+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: cluster 2026-03-09T17:41:33.369688+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: audit 2026-03-09T17:41:33.372983+0000 mon.c (mon.2) 747 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: audit 2026-03-09T17:41:33.372983+0000 mon.c (mon.2) 747 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: audit 2026-03-09T17:41:33.376674+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:34 vm00 bash[20770]: audit 2026-03-09T17:41:33.376674+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: cluster 2026-03-09T17:41:32.854667+0000 mgr.y (mgr.14505) 596 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: cluster 2026-03-09T17:41:32.854667+0000 mgr.y (mgr.14505) 596 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: cluster 2026-03-09T17:41:33.369688+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: cluster 2026-03-09T17:41:33.369688+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: audit 2026-03-09T17:41:33.372983+0000 mon.c (mon.2) 747 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: audit 2026-03-09T17:41:33.372983+0000 mon.c (mon.2) 747 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: audit 2026-03-09T17:41:33.376674+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:34 vm00 bash[28333]: audit 2026-03-09T17:41:33.376674+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: cluster 2026-03-09T17:41:32.854667+0000 mgr.y (mgr.14505) 596 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: cluster 2026-03-09T17:41:32.854667+0000 mgr.y (mgr.14505) 596 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: cluster 2026-03-09T17:41:33.369688+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: cluster 2026-03-09T17:41:33.369688+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: audit 2026-03-09T17:41:33.372983+0000 mon.c (mon.2) 747 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: audit 2026-03-09T17:41:33.372983+0000 mon.c (mon.2) 747 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: audit 2026-03-09T17:41:33.376674+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:35.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:34 vm02 bash[23351]: audit 2026-03-09T17:41:33.376674+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: audit 2026-03-09T17:41:34.498598+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: audit 2026-03-09T17:41:34.498598+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: cluster 2026-03-09T17:41:34.555331+0000 mon.a (mon.0) 3333 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: cluster 2026-03-09T17:41:34.555331+0000 mon.a (mon.0) 3333 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: audit 2026-03-09T17:41:34.846780+0000 mon.c (mon.2) 748 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: audit 2026-03-09T17:41:34.846780+0000 mon.c (mon.2) 748 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: audit 2026-03-09T17:41:34.847053+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: audit 2026-03-09T17:41:34.847053+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: cluster 2026-03-09T17:41:34.855029+0000 mgr.y (mgr.14505) 597 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:35 vm00 bash[28333]: cluster 2026-03-09T17:41:34.855029+0000 mgr.y (mgr.14505) 597 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: audit 2026-03-09T17:41:34.498598+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: audit 2026-03-09T17:41:34.498598+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: cluster 2026-03-09T17:41:34.555331+0000 mon.a (mon.0) 3333 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: cluster 2026-03-09T17:41:34.555331+0000 mon.a (mon.0) 3333 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: audit 2026-03-09T17:41:34.846780+0000 mon.c (mon.2) 748 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: audit 2026-03-09T17:41:34.846780+0000 mon.c (mon.2) 748 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: audit 2026-03-09T17:41:34.847053+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: audit 2026-03-09T17:41:34.847053+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: cluster 2026-03-09T17:41:34.855029+0000 mgr.y (mgr.14505) 597 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:36.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:35 vm00 bash[20770]: cluster 2026-03-09T17:41:34.855029+0000 mgr.y (mgr.14505) 597 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: audit 2026-03-09T17:41:34.498598+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: audit 2026-03-09T17:41:34.498598+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: cluster 2026-03-09T17:41:34.555331+0000 mon.a (mon.0) 3333 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: cluster 2026-03-09T17:41:34.555331+0000 mon.a (mon.0) 3333 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: audit 2026-03-09T17:41:34.846780+0000 mon.c (mon.2) 748 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: audit 2026-03-09T17:41:34.846780+0000 mon.c (mon.2) 748 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: audit 2026-03-09T17:41:34.847053+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: audit 2026-03-09T17:41:34.847053+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: cluster 2026-03-09T17:41:34.855029+0000 mgr.y (mgr.14505) 597 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:36.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:35 vm02 bash[23351]: cluster 2026-03-09T17:41:34.855029+0000 mgr.y (mgr.14505) 597 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:36.692 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:41:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:41:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: audit 2026-03-09T17:41:35.673539+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: audit 2026-03-09T17:41:35.673539+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: cluster 2026-03-09T17:41:35.685984+0000 mon.a (mon.0) 3336 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: cluster 2026-03-09T17:41:35.685984+0000 mon.a (mon.0) 3336 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: audit 2026-03-09T17:41:35.691989+0000 mon.c (mon.2) 749 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: audit 2026-03-09T17:41:35.691989+0000 mon.c (mon.2) 749 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: audit 2026-03-09T17:41:35.692315+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:36 vm00 bash[28333]: audit 2026-03-09T17:41:35.692315+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: audit 2026-03-09T17:41:35.673539+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: audit 2026-03-09T17:41:35.673539+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: cluster 2026-03-09T17:41:35.685984+0000 mon.a (mon.0) 3336 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: cluster 2026-03-09T17:41:35.685984+0000 mon.a (mon.0) 3336 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: audit 2026-03-09T17:41:35.691989+0000 mon.c (mon.2) 749 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: audit 2026-03-09T17:41:35.691989+0000 mon.c (mon.2) 749 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: audit 2026-03-09T17:41:35.692315+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:36 vm00 bash[20770]: audit 2026-03-09T17:41:35.692315+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: audit 2026-03-09T17:41:35.673539+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: audit 2026-03-09T17:41:35.673539+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: cluster 2026-03-09T17:41:35.685984+0000 mon.a (mon.0) 3336 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: cluster 2026-03-09T17:41:35.685984+0000 mon.a (mon.0) 3336 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: audit 2026-03-09T17:41:35.691989+0000 mon.c (mon.2) 749 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: audit 2026-03-09T17:41:35.691989+0000 mon.c (mon.2) 749 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: audit 2026-03-09T17:41:35.692315+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:37.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:36 vm02 bash[23351]: audit 2026-03-09T17:41:35.692315+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: audit 2026-03-09T17:41:36.698895+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: audit 2026-03-09T17:41:36.698895+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: cluster 2026-03-09T17:41:36.703490+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: cluster 2026-03-09T17:41:36.703490+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: audit 2026-03-09T17:41:36.706680+0000 mon.c (mon.2) 750 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: audit 2026-03-09T17:41:36.706680+0000 mon.c (mon.2) 750 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: audit 2026-03-09T17:41:36.707104+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: audit 2026-03-09T17:41:36.707104+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: cluster 2026-03-09T17:41:36.855613+0000 mgr.y (mgr.14505) 598 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:37 vm00 bash[28333]: cluster 2026-03-09T17:41:36.855613+0000 mgr.y (mgr.14505) 598 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: audit 2026-03-09T17:41:36.698895+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: audit 2026-03-09T17:41:36.698895+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: cluster 2026-03-09T17:41:36.703490+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: cluster 2026-03-09T17:41:36.703490+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: audit 2026-03-09T17:41:36.706680+0000 mon.c (mon.2) 750 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: audit 2026-03-09T17:41:36.706680+0000 mon.c (mon.2) 750 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: audit 2026-03-09T17:41:36.707104+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: audit 2026-03-09T17:41:36.707104+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: cluster 2026-03-09T17:41:36.855613+0000 mgr.y (mgr.14505) 598 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:37 vm00 bash[20770]: cluster 2026-03-09T17:41:36.855613+0000 mgr.y (mgr.14505) 598 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: audit 2026-03-09T17:41:36.698895+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: audit 2026-03-09T17:41:36.698895+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: cluster 2026-03-09T17:41:36.703490+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: cluster 2026-03-09T17:41:36.703490+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: audit 2026-03-09T17:41:36.706680+0000 mon.c (mon.2) 750 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: audit 2026-03-09T17:41:36.706680+0000 mon.c (mon.2) 750 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: audit 2026-03-09T17:41:36.707104+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: audit 2026-03-09T17:41:36.707104+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: cluster 2026-03-09T17:41:36.855613+0000 mgr.y (mgr.14505) 598 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:38.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:37 vm02 bash[23351]: cluster 2026-03-09T17:41:36.855613+0000 mgr.y (mgr.14505) 598 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:37.702157+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:37.702157+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: cluster 2026-03-09T17:41:37.708255+0000 mon.a (mon.0) 3342 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: cluster 2026-03-09T17:41:37.708255+0000 mon.a (mon.0) 3342 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:37.709533+0000 mon.c (mon.2) 751 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:37.709533+0000 mon.c (mon.2) 751 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:37.710613+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:37.710613+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:38.705735+0000 mon.a (mon.0) 3344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: audit 2026-03-09T17:41:38.705735+0000 mon.a (mon.0) 3344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: cluster 2026-03-09T17:41:38.709916+0000 mon.a (mon.0) 3345 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T17:41:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:38 vm02 bash[23351]: cluster 2026-03-09T17:41:38.709916+0000 mon.a (mon.0) 3345 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:37.702157+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:37.702157+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: cluster 2026-03-09T17:41:37.708255+0000 mon.a (mon.0) 3342 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: cluster 2026-03-09T17:41:37.708255+0000 mon.a (mon.0) 3342 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:37.709533+0000 mon.c (mon.2) 751 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:37.709533+0000 mon.c (mon.2) 751 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:37.710613+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:37.710613+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:38.705735+0000 mon.a (mon.0) 3344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: audit 2026-03-09T17:41:38.705735+0000 mon.a (mon.0) 3344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: cluster 2026-03-09T17:41:38.709916+0000 mon.a (mon.0) 3345 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:38 vm00 bash[28333]: cluster 2026-03-09T17:41:38.709916+0000 mon.a (mon.0) 3345 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:37.702157+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:37.702157+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: cluster 2026-03-09T17:41:37.708255+0000 mon.a (mon.0) 3342 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: cluster 2026-03-09T17:41:37.708255+0000 mon.a (mon.0) 3342 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:37.709533+0000 mon.c (mon.2) 751 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:37.709533+0000 mon.c (mon.2) 751 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:37.710613+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:37.710613+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:38.705735+0000 mon.a (mon.0) 3344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:41:39.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: audit 2026-03-09T17:41:38.705735+0000 mon.a (mon.0) 3344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:41:39.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: cluster 2026-03-09T17:41:38.709916+0000 mon.a (mon.0) 3345 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T17:41:39.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:38 vm00 bash[20770]: cluster 2026-03-09T17:41:38.709916+0000 mon.a (mon.0) 3345 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: audit 2026-03-09T17:41:38.715194+0000 mon.c (mon.2) 752 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: audit 2026-03-09T17:41:38.715194+0000 mon.c (mon.2) 752 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: audit 2026-03-09T17:41:38.781116+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: audit 2026-03-09T17:41:38.781116+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: cluster 2026-03-09T17:41:38.855911+0000 mgr.y (mgr.14505) 599 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: cluster 2026-03-09T17:41:38.855911+0000 mgr.y (mgr.14505) 599 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: audit 2026-03-09T17:41:39.708880+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: audit 2026-03-09T17:41:39.708880+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: cluster 2026-03-09T17:41:39.712285+0000 mon.a (mon.0) 3348 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T17:41:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:39 vm02 bash[23351]: cluster 2026-03-09T17:41:39.712285+0000 mon.a (mon.0) 3348 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: audit 2026-03-09T17:41:38.715194+0000 mon.c (mon.2) 752 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: audit 2026-03-09T17:41:38.715194+0000 mon.c (mon.2) 752 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: audit 2026-03-09T17:41:38.781116+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: audit 2026-03-09T17:41:38.781116+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: cluster 2026-03-09T17:41:38.855911+0000 mgr.y (mgr.14505) 599 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: cluster 2026-03-09T17:41:38.855911+0000 mgr.y (mgr.14505) 599 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: audit 2026-03-09T17:41:39.708880+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: audit 2026-03-09T17:41:39.708880+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: cluster 2026-03-09T17:41:39.712285+0000 mon.a (mon.0) 3348 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:39 vm00 bash[28333]: cluster 2026-03-09T17:41:39.712285+0000 mon.a (mon.0) 3348 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: audit 2026-03-09T17:41:38.715194+0000 mon.c (mon.2) 752 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: audit 2026-03-09T17:41:38.715194+0000 mon.c (mon.2) 752 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: audit 2026-03-09T17:41:38.781116+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: audit 2026-03-09T17:41:38.781116+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: cluster 2026-03-09T17:41:38.855911+0000 mgr.y (mgr.14505) 599 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: cluster 2026-03-09T17:41:38.855911+0000 mgr.y (mgr.14505) 599 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: audit 2026-03-09T17:41:39.708880+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: audit 2026-03-09T17:41:39.708880+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: cluster 2026-03-09T17:41:39.712285+0000 mon.a (mon.0) 3348 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T17:41:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:39 vm00 bash[20770]: cluster 2026-03-09T17:41:39.712285+0000 mon.a (mon.0) 3348 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T17:41:42.194 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:41 vm02 bash[23351]: cluster 2026-03-09T17:41:40.856215+0000 mgr.y (mgr.14505) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:42.194 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:41 vm02 bash[23351]: cluster 2026-03-09T17:41:40.856215+0000 mgr.y (mgr.14505) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:41 vm00 bash[28333]: cluster 2026-03-09T17:41:40.856215+0000 mgr.y (mgr.14505) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:41 vm00 bash[28333]: cluster 2026-03-09T17:41:40.856215+0000 mgr.y (mgr.14505) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:41 vm00 bash[20770]: cluster 2026-03-09T17:41:40.856215+0000 mgr.y (mgr.14505) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:41 vm00 bash[20770]: cluster 2026-03-09T17:41:40.856215+0000 mgr.y (mgr.14505) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:41:42.636 
INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:41:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:41:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:42 vm00 bash[28333]: audit 2026-03-09T17:41:42.194144+0000 mgr.y (mgr.14505) 601 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:42 vm00 bash[28333]: audit 2026-03-09T17:41:42.194144+0000 mgr.y (mgr.14505) 601 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:42 vm00 bash[20770]: audit 2026-03-09T17:41:42.194144+0000 mgr.y (mgr.14505) 601 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:42 vm00 bash[20770]: audit 2026-03-09T17:41:42.194144+0000 mgr.y (mgr.14505) 601 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:42 vm02 bash[23351]: audit 2026-03-09T17:41:42.194144+0000 mgr.y (mgr.14505) 601 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:42 vm02 bash[23351]: audit 2026-03-09T17:41:42.194144+0000 mgr.y (mgr.14505) 601 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:43 vm00 bash[28333]: cluster 2026-03-09T17:41:42.857085+0000 mgr.y (mgr.14505) 602 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 5.5 KiB/s wr, 2 op/s 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:43 vm00 bash[28333]: cluster 2026-03-09T17:41:42.857085+0000 mgr.y (mgr.14505) 602 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 5.5 KiB/s wr, 2 op/s 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:43 vm00 bash[28333]: audit 2026-03-09T17:41:43.325529+0000 mon.c (mon.2) 753 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:43 vm00 bash[28333]: audit 2026-03-09T17:41:43.325529+0000 mon.c (mon.2) 753 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:43 vm00 bash[20770]: cluster 2026-03-09T17:41:42.857085+0000 mgr.y (mgr.14505) 602 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 5.5 KiB/s wr, 2 op/s 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:43 vm00 bash[20770]: cluster 2026-03-09T17:41:42.857085+0000 mgr.y (mgr.14505) 602 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 
4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 5.5 KiB/s wr, 2 op/s 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:43 vm00 bash[20770]: audit 2026-03-09T17:41:43.325529+0000 mon.c (mon.2) 753 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:43 vm00 bash[20770]: audit 2026-03-09T17:41:43.325529+0000 mon.c (mon.2) 753 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:43 vm02 bash[23351]: cluster 2026-03-09T17:41:42.857085+0000 mgr.y (mgr.14505) 602 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 5.5 KiB/s wr, 2 op/s 2026-03-09T17:41:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:43 vm02 bash[23351]: cluster 2026-03-09T17:41:42.857085+0000 mgr.y (mgr.14505) 602 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 5.5 KiB/s wr, 2 op/s 2026-03-09T17:41:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:43 vm02 bash[23351]: audit 2026-03-09T17:41:43.325529+0000 mon.c (mon.2) 753 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:43 vm02 bash[23351]: audit 2026-03-09T17:41:43.325529+0000 mon.c (mon.2) 753 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:45 vm00 bash[28333]: cluster 2026-03-09T17:41:44.857599+0000 mgr.y (mgr.14505) 603 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.6 KiB/s wr, 2 op/s 2026-03-09T17:41:46.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:45 vm00 bash[28333]: cluster 2026-03-09T17:41:44.857599+0000 mgr.y (mgr.14505) 603 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.6 KiB/s wr, 2 op/s 2026-03-09T17:41:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:45 vm00 bash[20770]: cluster 2026-03-09T17:41:44.857599+0000 mgr.y (mgr.14505) 603 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.6 KiB/s wr, 2 op/s 2026-03-09T17:41:46.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:45 vm00 bash[20770]: cluster 2026-03-09T17:41:44.857599+0000 mgr.y (mgr.14505) 603 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.6 KiB/s wr, 2 op/s 2026-03-09T17:41:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:45 vm02 bash[23351]: cluster 2026-03-09T17:41:44.857599+0000 mgr.y (mgr.14505) 603 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.6 KiB/s wr, 2 op/s 2026-03-09T17:41:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:45 vm02 bash[23351]: cluster 2026-03-09T17:41:44.857599+0000 mgr.y (mgr.14505) 603 : cluster 
[DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.6 KiB/s wr, 2 op/s 2026-03-09T17:41:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:41:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:41:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:41:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:47 vm00 bash[28333]: cluster 2026-03-09T17:41:46.857905+0000 mgr.y (mgr.14505) 604 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 628 B/s rd, 4.1 KiB/s wr, 1 op/s 2026-03-09T17:41:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:47 vm00 bash[28333]: cluster 2026-03-09T17:41:46.857905+0000 mgr.y (mgr.14505) 604 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 628 B/s rd, 4.1 KiB/s wr, 1 op/s 2026-03-09T17:41:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:47 vm00 bash[20770]: cluster 2026-03-09T17:41:46.857905+0000 mgr.y (mgr.14505) 604 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 628 B/s rd, 4.1 KiB/s wr, 1 op/s 2026-03-09T17:41:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:47 vm00 bash[20770]: cluster 2026-03-09T17:41:46.857905+0000 mgr.y (mgr.14505) 604 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 628 B/s rd, 4.1 KiB/s wr, 1 op/s 2026-03-09T17:41:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:47 vm02 bash[23351]: cluster 2026-03-09T17:41:46.857905+0000 mgr.y (mgr.14505) 604 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 628 B/s rd, 4.1 KiB/s wr, 1 op/s 2026-03-09T17:41:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:47 vm02 bash[23351]: cluster 2026-03-09T17:41:46.857905+0000 mgr.y (mgr.14505) 604 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 628 B/s rd, 4.1 KiB/s wr, 1 op/s 2026-03-09T17:41:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:49 vm00 bash[28333]: cluster 2026-03-09T17:41:48.858692+0000 mgr.y (mgr.14505) 605 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.4 KiB/s wr, 2 op/s 2026-03-09T17:41:50.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:49 vm00 bash[28333]: cluster 2026-03-09T17:41:48.858692+0000 mgr.y (mgr.14505) 605 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.4 KiB/s wr, 2 op/s 2026-03-09T17:41:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:49 vm00 bash[20770]: cluster 2026-03-09T17:41:48.858692+0000 mgr.y (mgr.14505) 605 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.4 KiB/s wr, 2 op/s 2026-03-09T17:41:50.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:49 vm00 bash[20770]: cluster 2026-03-09T17:41:48.858692+0000 mgr.y (mgr.14505) 605 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.4 KiB/s wr, 2 op/s 2026-03-09T17:41:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:49 vm02 bash[23351]: cluster 2026-03-09T17:41:48.858692+0000 mgr.y (mgr.14505) 605 : cluster [DBG] pgmap v1068: 268 pgs: 268 
active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.4 KiB/s wr, 2 op/s 2026-03-09T17:41:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:49 vm02 bash[23351]: cluster 2026-03-09T17:41:48.858692+0000 mgr.y (mgr.14505) 605 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.4 KiB/s wr, 2 op/s 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: cluster 2026-03-09T17:41:50.859020+0000 mgr.y (mgr.14505) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 918 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: cluster 2026-03-09T17:41:50.859020+0000 mgr.y (mgr.14505) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 918 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.767394+0000 mon.c (mon.2) 754 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.767394+0000 mon.c (mon.2) 754 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.767769+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.767769+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.768300+0000 mon.c (mon.2) 755 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.768300+0000 mon.c (mon.2) 755 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.768590+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:52 vm00 bash[28333]: audit 2026-03-09T17:41:51.768590+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: cluster 2026-03-09T17:41:50.859020+0000 mgr.y (mgr.14505) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 918 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: cluster 2026-03-09T17:41:50.859020+0000 mgr.y (mgr.14505) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 918 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.767394+0000 mon.c (mon.2) 754 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.767394+0000 mon.c (mon.2) 754 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.767769+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.767769+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.768300+0000 mon.c (mon.2) 755 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.768300+0000 mon.c (mon.2) 755 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.768590+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:52 vm00 bash[20770]: audit 2026-03-09T17:41:51.768590+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: cluster 2026-03-09T17:41:50.859020+0000 mgr.y (mgr.14505) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 918 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: cluster 2026-03-09T17:41:50.859020+0000 mgr.y (mgr.14505) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 918 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.767394+0000 mon.c (mon.2) 754 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.767394+0000 mon.c (mon.2) 754 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.767769+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.767769+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.768300+0000 mon.c (mon.2) 755 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.768300+0000 mon.c (mon.2) 755 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.768590+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:52 vm02 bash[23351]: audit 2026-03-09T17:41:51.768590+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]: dispatch 2026-03-09T17:41:52.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:41:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:41:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:53 vm02 bash[23351]: audit 2026-03-09T17:41:51.990233+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]': finished 2026-03-09T17:41:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:53 vm02 bash[23351]: audit 2026-03-09T17:41:51.990233+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]': finished 2026-03-09T17:41:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:53 vm02 bash[23351]: cluster 2026-03-09T17:41:51.995939+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T17:41:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:53 vm02 bash[23351]: cluster 2026-03-09T17:41:51.995939+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T17:41:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:53 vm02 bash[23351]: audit 2026-03-09T17:41:52.203901+0000 mgr.y (mgr.14505) 607 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:53 vm02 bash[23351]: audit 2026-03-09T17:41:52.203901+0000 mgr.y (mgr.14505) 607 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:53 vm00 bash[28333]: audit 2026-03-09T17:41:51.990233+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]': finished 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:53 vm00 bash[28333]: audit 2026-03-09T17:41:51.990233+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]': finished 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:53 vm00 bash[28333]: cluster 2026-03-09T17:41:51.995939+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:53 vm00 bash[28333]: cluster 2026-03-09T17:41:51.995939+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:53 vm00 bash[28333]: audit 2026-03-09T17:41:52.203901+0000 mgr.y (mgr.14505) 607 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:53 vm00 bash[28333]: audit 2026-03-09T17:41:52.203901+0000 mgr.y (mgr.14505) 607 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:53 vm00 bash[20770]: audit 2026-03-09T17:41:51.990233+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]': finished 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:53 vm00 bash[20770]: audit 2026-03-09T17:41:51.990233+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-140"}]': finished 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:53 vm00 bash[20770]: cluster 2026-03-09T17:41:51.995939+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:53 vm00 bash[20770]: cluster 2026-03-09T17:41:51.995939+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:53 vm00 bash[20770]: audit 2026-03-09T17:41:52.203901+0000 mgr.y (mgr.14505) 607 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:53 vm00 bash[20770]: audit 2026-03-09T17:41:52.203901+0000 mgr.y (mgr.14505) 607 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:41:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:54 vm02 bash[23351]: cluster 2026-03-09T17:41:52.859551+0000 mgr.y (mgr.14505) 608 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.5 KiB/s wr, 2 op/s 2026-03-09T17:41:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:54 vm02 bash[23351]: cluster 2026-03-09T17:41:52.859551+0000 mgr.y (mgr.14505) 608 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.5 KiB/s wr, 2 op/s 2026-03-09T17:41:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:54 vm02 bash[23351]: cluster 2026-03-09T17:41:53.002920+0000 mon.a 
(mon.0) 3353 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:54 vm02 bash[23351]: cluster 2026-03-09T17:41:53.002920+0000 mon.a (mon.0) 3353 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:54 vm02 bash[23351]: cluster 2026-03-09T17:41:53.082236+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T17:41:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:54 vm02 bash[23351]: cluster 2026-03-09T17:41:53.082236+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:54 vm00 bash[28333]: cluster 2026-03-09T17:41:52.859551+0000 mgr.y (mgr.14505) 608 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.5 KiB/s wr, 2 op/s 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:54 vm00 bash[28333]: cluster 2026-03-09T17:41:52.859551+0000 mgr.y (mgr.14505) 608 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.5 KiB/s wr, 2 op/s 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:54 vm00 bash[28333]: cluster 2026-03-09T17:41:53.002920+0000 mon.a (mon.0) 3353 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:54 vm00 bash[28333]: cluster 2026-03-09T17:41:53.002920+0000 mon.a (mon.0) 3353 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:54 vm00 bash[28333]: cluster 2026-03-09T17:41:53.082236+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:54 vm00 bash[28333]: cluster 2026-03-09T17:41:53.082236+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:54 vm00 bash[20770]: cluster 2026-03-09T17:41:52.859551+0000 mgr.y (mgr.14505) 608 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.5 KiB/s wr, 2 op/s 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:54 vm00 bash[20770]: cluster 2026-03-09T17:41:52.859551+0000 mgr.y (mgr.14505) 608 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 7.5 KiB/s wr, 2 op/s 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:54 vm00 bash[20770]: cluster 2026-03-09T17:41:53.002920+0000 mon.a (mon.0) 3353 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:54 vm00 bash[20770]: cluster 2026-03-09T17:41:53.002920+0000 mon.a (mon.0) 3353 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:54 vm00 bash[20770]: 
cluster 2026-03-09T17:41:53.082236+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T17:41:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:54 vm00 bash[20770]: cluster 2026-03-09T17:41:53.082236+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T17:41:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:55 vm02 bash[23351]: cluster 2026-03-09T17:41:54.074303+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T17:41:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:55 vm02 bash[23351]: cluster 2026-03-09T17:41:54.074303+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T17:41:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:55 vm02 bash[23351]: audit 2026-03-09T17:41:54.074753+0000 mon.c (mon.2) 756 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:55 vm02 bash[23351]: audit 2026-03-09T17:41:54.074753+0000 mon.c (mon.2) 756 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:55 vm02 bash[23351]: audit 2026-03-09T17:41:54.081290+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:55 vm02 bash[23351]: audit 2026-03-09T17:41:54.081290+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:55 vm00 bash[28333]: cluster 2026-03-09T17:41:54.074303+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:55 vm00 bash[28333]: cluster 2026-03-09T17:41:54.074303+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:55 vm00 bash[28333]: audit 2026-03-09T17:41:54.074753+0000 mon.c (mon.2) 756 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:55 vm00 bash[28333]: audit 2026-03-09T17:41:54.074753+0000 mon.c (mon.2) 756 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:55 vm00 bash[28333]: audit 2026-03-09T17:41:54.081290+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:55 vm00 bash[28333]: audit 2026-03-09T17:41:54.081290+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:55 vm00 bash[20770]: cluster 2026-03-09T17:41:54.074303+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:55 vm00 bash[20770]: cluster 2026-03-09T17:41:54.074303+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:55 vm00 bash[20770]: audit 2026-03-09T17:41:54.074753+0000 mon.c (mon.2) 756 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:55 vm00 bash[20770]: audit 2026-03-09T17:41:54.074753+0000 mon.c (mon.2) 756 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:55 vm00 bash[20770]: audit 2026-03-09T17:41:54.081290+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:55.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:55 vm00 bash[20770]: audit 2026-03-09T17:41:54.081290+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: cluster 2026-03-09T17:41:54.859862+0000 mgr.y (mgr.14505) 609 : cluster [DBG] pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: cluster 2026-03-09T17:41:54.859862+0000 mgr.y (mgr.14505) 609 : cluster [DBG] pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: audit 2026-03-09T17:41:55.066686+0000 mon.a (mon.0) 3357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: audit 2026-03-09T17:41:55.066686+0000 mon.a (mon.0) 3357 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: cluster 2026-03-09T17:41:55.079288+0000 mon.a (mon.0) 3358 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: cluster 2026-03-09T17:41:55.079288+0000 mon.a (mon.0) 3358 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: audit 2026-03-09T17:41:55.137795+0000 mon.c (mon.2) 757 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: audit 2026-03-09T17:41:55.137795+0000 mon.c (mon.2) 757 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: audit 2026-03-09T17:41:55.138283+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:56 vm02 bash[23351]: audit 2026-03-09T17:41:55.138283+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:41:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:41:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: cluster 2026-03-09T17:41:54.859862+0000 mgr.y (mgr.14505) 609 : cluster [DBG] pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: cluster 2026-03-09T17:41:54.859862+0000 mgr.y (mgr.14505) 609 : cluster [DBG] pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: audit 2026-03-09T17:41:55.066686+0000 mon.a (mon.0) 3357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: audit 2026-03-09T17:41:55.066686+0000 mon.a (mon.0) 3357 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: cluster 2026-03-09T17:41:55.079288+0000 mon.a (mon.0) 3358 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: cluster 2026-03-09T17:41:55.079288+0000 mon.a (mon.0) 3358 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: audit 2026-03-09T17:41:55.137795+0000 mon.c (mon.2) 757 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: audit 2026-03-09T17:41:55.137795+0000 mon.c (mon.2) 757 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: audit 2026-03-09T17:41:55.138283+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:56 vm00 bash[28333]: audit 2026-03-09T17:41:55.138283+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: cluster 2026-03-09T17:41:54.859862+0000 mgr.y (mgr.14505) 609 : cluster [DBG] pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: cluster 2026-03-09T17:41:54.859862+0000 mgr.y (mgr.14505) 609 : cluster [DBG] pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:41:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: audit 2026-03-09T17:41:55.066686+0000 mon.a (mon.0) 3357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: audit 2026-03-09T17:41:55.066686+0000 mon.a (mon.0) 3357 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: cluster 2026-03-09T17:41:55.079288+0000 mon.a (mon.0) 3358 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: cluster 2026-03-09T17:41:55.079288+0000 mon.a (mon.0) 3358 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: audit 2026-03-09T17:41:55.137795+0000 mon.c (mon.2) 757 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: audit 2026-03-09T17:41:55.137795+0000 mon.c (mon.2) 757 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: audit 2026-03-09T17:41:55.138283+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:56.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:56 vm00 bash[20770]: audit 2026-03-09T17:41:55.138283+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.080510+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.080510+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: cluster 2026-03-09T17:41:56.083194+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: cluster 2026-03-09T17:41:56.083194+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.087797+0000 mon.c (mon.2) 758 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.087797+0000 mon.c (mon.2) 758 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.092018+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.092018+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.851019+0000 mon.c (mon.2) 759 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:41:57.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:57 vm02 bash[23351]: audit 2026-03-09T17:41:56.851019+0000 mon.c (mon.2) 759 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.080510+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.080510+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: cluster 2026-03-09T17:41:56.083194+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: cluster 2026-03-09T17:41:56.083194+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.087797+0000 mon.c (mon.2) 758 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.087797+0000 mon.c (mon.2) 758 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.092018+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.092018+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.851019+0000 mon.c (mon.2) 759 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:57 vm00 bash[28333]: audit 2026-03-09T17:41:56.851019+0000 mon.c (mon.2) 759 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.080510+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.080510+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: cluster 2026-03-09T17:41:56.083194+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: cluster 2026-03-09T17:41:56.083194+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.087797+0000 mon.c (mon.2) 758 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.087797+0000 mon.c (mon.2) 758 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.092018+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.092018+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.851019+0000 mon.c (mon.2) 759 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:41:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:57 vm00 bash[20770]: audit 2026-03-09T17:41:56.851019+0000 mon.c (mon.2) 759 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: cluster 2026-03-09T17:41:56.860175+0000 mgr.y (mgr.14505) 610 : cluster [DBG] pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: cluster 2026-03-09T17:41:56.860175+0000 mgr.y (mgr.14505) 610 : cluster [DBG] pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.102205+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.102205+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: cluster 2026-03-09T17:41:57.112066+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: cluster 2026-03-09T17:41:57.112066+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.120529+0000 mon.c (mon.2) 760 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.120529+0000 mon.c (mon.2) 760 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.120994+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.120994+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.168865+0000 mon.a (mon.0) 3366 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.168865+0000 mon.a (mon.0) 3366 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.178497+0000 mon.a (mon.0) 3367 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.178497+0000 mon.a (mon.0) 3367 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.527783+0000 mon.c (mon.2) 761 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:41:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.527783+0000 mon.c (mon.2) 761 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:41:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.529127+0000 mon.c (mon.2) 762 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T17:41:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.529127+0000 mon.c (mon.2) 762 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:41:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.537949+0000 mon.a (mon.0) 3368 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.387 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:58 vm02 bash[23351]: audit 2026-03-09T17:41:57.537949+0000 mon.a (mon.0) 3368 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: cluster 2026-03-09T17:41:56.860175+0000 mgr.y (mgr.14505) 610 : cluster [DBG] pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: cluster 2026-03-09T17:41:56.860175+0000 mgr.y (mgr.14505) 610 : cluster [DBG] pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.102205+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.102205+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: cluster 2026-03-09T17:41:57.112066+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: cluster 2026-03-09T17:41:57.112066+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.120529+0000 mon.c (mon.2) 760 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.120529+0000 mon.c (mon.2) 760 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.120994+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.120994+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.168865+0000 mon.a (mon.0) 3366 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.168865+0000 mon.a (mon.0) 3366 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.178497+0000 mon.a (mon.0) 3367 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.178497+0000 mon.a (mon.0) 3367 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.527783+0000 mon.c (mon.2) 761 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.527783+0000 mon.c (mon.2) 761 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.529127+0000 mon.c (mon.2) 762 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.529127+0000 mon.c (mon.2) 762 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.537949+0000 mon.a (mon.0) 3368 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:58 vm00 bash[28333]: audit 2026-03-09T17:41:57.537949+0000 mon.a (mon.0) 3368 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: cluster 2026-03-09T17:41:56.860175+0000 mgr.y (mgr.14505) 610 : cluster [DBG] pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: cluster 2026-03-09T17:41:56.860175+0000 mgr.y (mgr.14505) 610 : cluster [DBG] pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.102205+0000 mon.a (mon.0) 3363 : audit [INF] 
from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.102205+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: cluster 2026-03-09T17:41:57.112066+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: cluster 2026-03-09T17:41:57.112066+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.120529+0000 mon.c (mon.2) 760 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.120529+0000 mon.c (mon.2) 760 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.120994+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.120994+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]: dispatch 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.168865+0000 mon.a (mon.0) 3366 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.168865+0000 mon.a (mon.0) 3366 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.178497+0000 mon.a (mon.0) 3367 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.178497+0000 mon.a (mon.0) 3367 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.527783+0000 mon.c (mon.2) 761 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.527783+0000 mon.c (mon.2) 761 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.529127+0000 mon.c (mon.2) 762 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.529127+0000 mon.c (mon.2) 762 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.537949+0000 mon.a (mon.0) 3368 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:58.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:58 vm00 bash[20770]: audit 2026-03-09T17:41:57.537949+0000 mon.a (mon.0) 3368 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: cluster 2026-03-09T17:41:58.102279+0000 mon.a (mon.0) 3369 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: cluster 2026-03-09T17:41:58.102279+0000 mon.a (mon.0) 3369 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.105796+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]': finished 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.105796+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]': finished 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.113326+0000 mon.c (mon.2) 763 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.113326+0000 mon.c (mon.2) 763 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: cluster 2026-03-09T17:41:58.113332+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: cluster 2026-03-09T17:41:58.113332+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.118118+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.118118+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.338073+0000 mon.a (mon.0) 3373 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.338073+0000 mon.a (mon.0) 3373 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.338920+0000 mon.c (mon.2) 764 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:41:59 vm02 bash[23351]: audit 2026-03-09T17:41:58.338920+0000 mon.c (mon.2) 764 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: cluster 2026-03-09T17:41:58.102279+0000 mon.a (mon.0) 3369 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: cluster 2026-03-09T17:41:58.102279+0000 mon.a (mon.0) 3369 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.105796+0000 mon.a (mon.0) 3370 : audit [INF] 
from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]': finished 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.105796+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]': finished 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.113326+0000 mon.c (mon.2) 763 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.113326+0000 mon.c (mon.2) 763 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: cluster 2026-03-09T17:41:58.113332+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: cluster 2026-03-09T17:41:58.113332+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.118118+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.118118+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.338073+0000 mon.a (mon.0) 3373 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.338073+0000 mon.a (mon.0) 3373 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.338920+0000 mon.c (mon.2) 764 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:41:59 vm00 bash[28333]: audit 2026-03-09T17:41:58.338920+0000 mon.c (mon.2) 764 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: cluster 2026-03-09T17:41:58.102279+0000 mon.a (mon.0) 3369 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: cluster 2026-03-09T17:41:58.102279+0000 mon.a (mon.0) 3369 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.105796+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]': finished 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.105796+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-142", "mode": "writeback"}]': finished 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.113326+0000 mon.c (mon.2) 763 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.113326+0000 mon.c (mon.2) 763 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: cluster 2026-03-09T17:41:58.113332+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: cluster 2026-03-09T17:41:58.113332+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.118118+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.118118+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.338073+0000 mon.a (mon.0) 3373 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.338073+0000 mon.a (mon.0) 3373 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.338920+0000 mon.c (mon.2) 764 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:41:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:41:59 vm00 bash[20770]: audit 2026-03-09T17:41:58.338920+0000 mon.c (mon.2) 764 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: cluster 2026-03-09T17:41:58.860443+0000 mgr.y (mgr.14505) 611 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: cluster 2026-03-09T17:41:58.860443+0000 mgr.y (mgr.14505) 611 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:41:59.109707+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:41:59.109707+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:41:59.114306+0000 mon.c (mon.2) 765 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:41:59.114306+0000 mon.c (mon.2) 765 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: cluster 2026-03-09T17:41:59.114440+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: cluster 2026-03-09T17:41:59.114440+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:41:59.123708+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:41:59.123708+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:42:00.112960+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:42:00.112960+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: cluster 2026-03-09T17:42:00.115929+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: cluster 2026-03-09T17:42:00.115929+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:42:00.121196+0000 mon.c (mon.2) 766 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:00 vm02 bash[23351]: audit 2026-03-09T17:42:00.121196+0000 mon.c (mon.2) 766 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: cluster 2026-03-09T17:41:58.860443+0000 mgr.y (mgr.14505) 611 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: cluster 2026-03-09T17:41:58.860443+0000 mgr.y (mgr.14505) 611 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:41:59.109707+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:41:59.109707+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:41:59.114306+0000 mon.c (mon.2) 765 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:41:59.114306+0000 mon.c (mon.2) 765 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: cluster 2026-03-09T17:41:59.114440+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: cluster 2026-03-09T17:41:59.114440+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:41:59.123708+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:41:59.123708+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:42:00.112960+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:42:00.112960+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: cluster 2026-03-09T17:42:00.115929+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: cluster 2026-03-09T17:42:00.115929+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:42:00.121196+0000 mon.c (mon.2) 766 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:00 vm00 bash[28333]: audit 2026-03-09T17:42:00.121196+0000 mon.c (mon.2) 766 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: cluster 2026-03-09T17:41:58.860443+0000 mgr.y (mgr.14505) 611 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: cluster 2026-03-09T17:41:58.860443+0000 mgr.y (mgr.14505) 611 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:41:59.109707+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:41:59.109707+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:41:59.114306+0000 mon.c (mon.2) 765 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:41:59.114306+0000 mon.c (mon.2) 765 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: cluster 2026-03-09T17:41:59.114440+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: cluster 2026-03-09T17:41:59.114440+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:41:59.123708+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:41:59.123708+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:42:00.112960+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:42:00.112960+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: cluster 2026-03-09T17:42:00.115929+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: cluster 2026-03-09T17:42:00.115929+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T17:42:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:42:00.121196+0000 mon.c (mon.2) 766 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:00 vm00 bash[20770]: audit 2026-03-09T17:42:00.121196+0000 mon.c (mon.2) 766 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: audit 2026-03-09T17:42:00.125487+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: audit 2026-03-09T17:42:00.125487+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: cluster 2026-03-09T17:42:01.113109+0000 mon.a (mon.0) 3380 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: cluster 2026-03-09T17:42:01.113109+0000 mon.a (mon.0) 3380 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: audit 2026-03-09T17:42:01.116742+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: audit 2026-03-09T17:42:01.116742+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: audit 2026-03-09T17:42:01.120863+0000 mon.c (mon.2) 767 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: audit 2026-03-09T17:42:01.120863+0000 mon.c (mon.2) 767 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: cluster 2026-03-09T17:42:01.127750+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T17:42:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:01 vm02 bash[23351]: cluster 2026-03-09T17:42:01.127750+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: audit 2026-03-09T17:42:00.125487+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: audit 2026-03-09T17:42:00.125487+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: cluster 2026-03-09T17:42:01.113109+0000 mon.a (mon.0) 3380 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: cluster 2026-03-09T17:42:01.113109+0000 mon.a (mon.0) 3380 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: audit 2026-03-09T17:42:01.116742+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: audit 2026-03-09T17:42:01.116742+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: audit 2026-03-09T17:42:01.120863+0000 mon.c (mon.2) 767 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: audit 2026-03-09T17:42:01.120863+0000 mon.c (mon.2) 767 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: cluster 2026-03-09T17:42:01.127750+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:01 vm00 bash[28333]: cluster 2026-03-09T17:42:01.127750+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: audit 2026-03-09T17:42:00.125487+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: audit 2026-03-09T17:42:00.125487+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: cluster 2026-03-09T17:42:01.113109+0000 mon.a (mon.0) 3380 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: cluster 2026-03-09T17:42:01.113109+0000 mon.a (mon.0) 3380 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: audit 2026-03-09T17:42:01.116742+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: audit 2026-03-09T17:42:01.116742+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: audit 2026-03-09T17:42:01.120863+0000 mon.c (mon.2) 767 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: audit 2026-03-09T17:42:01.120863+0000 mon.c (mon.2) 767 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: cluster 2026-03-09T17:42:01.127750+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T17:42:01.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:01 vm00 bash[20770]: cluster 2026-03-09T17:42:01.127750+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: cluster 2026-03-09T17:42:00.860751+0000 mgr.y (mgr.14505) 612 : cluster [DBG] pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: cluster 2026-03-09T17:42:00.860751+0000 mgr.y (mgr.14505) 612 : cluster [DBG] pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: audit 2026-03-09T17:42:01.128123+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: audit 2026-03-09T17:42:01.128123+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: cluster 2026-03-09T17:42:01.809591+0000 mon.a (mon.0) 3384 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: cluster 2026-03-09T17:42:01.809591+0000 mon.a (mon.0) 3384 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: audit 2026-03-09T17:42:02.120054+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: audit 2026-03-09T17:42:02.120054+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: audit 2026-03-09T17:42:02.130206+0000 mon.c (mon.2) 768 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: audit 2026-03-09T17:42:02.130206+0000 mon.c (mon.2) 768 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: cluster 2026-03-09T17:42:02.133812+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:02 vm00 bash[28333]: cluster 2026-03-09T17:42:02.133812+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: cluster 2026-03-09T17:42:00.860751+0000 mgr.y (mgr.14505) 612 : cluster [DBG] pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: cluster 2026-03-09T17:42:00.860751+0000 mgr.y (mgr.14505) 612 : cluster [DBG] pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: audit 2026-03-09T17:42:01.128123+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: audit 2026-03-09T17:42:01.128123+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: cluster 2026-03-09T17:42:01.809591+0000 mon.a (mon.0) 3384 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: cluster 2026-03-09T17:42:01.809591+0000 mon.a (mon.0) 3384 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: audit 2026-03-09T17:42:02.120054+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: audit 2026-03-09T17:42:02.120054+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: audit 2026-03-09T17:42:02.130206+0000 mon.c (mon.2) 768 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: audit 2026-03-09T17:42:02.130206+0000 mon.c (mon.2) 768 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: cluster 2026-03-09T17:42:02.133812+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T17:42:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:02 vm00 bash[20770]: cluster 2026-03-09T17:42:02.133812+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: cluster 2026-03-09T17:42:00.860751+0000 mgr.y (mgr.14505) 612 : cluster [DBG] pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: cluster 2026-03-09T17:42:00.860751+0000 mgr.y (mgr.14505) 612 : cluster [DBG] pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: audit 2026-03-09T17:42:01.128123+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: audit 2026-03-09T17:42:01.128123+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: cluster 2026-03-09T17:42:01.809591+0000 mon.a (mon.0) 3384 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: cluster 2026-03-09T17:42:01.809591+0000 mon.a (mon.0) 3384 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: audit 2026-03-09T17:42:02.120054+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: audit 2026-03-09T17:42:02.120054+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: audit 2026-03-09T17:42:02.130206+0000 mon.c (mon.2) 768 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: audit 2026-03-09T17:42:02.130206+0000 mon.c (mon.2) 768 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: cluster 2026-03-09T17:42:02.133812+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T17:42:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:02 vm02 bash[23351]: cluster 2026-03-09T17:42:02.133812+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T17:42:02.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:42:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:02.135286+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:02.135286+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:02.214596+0000 mgr.y (mgr.14505) 613 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:02.214596+0000 mgr.y (mgr.14505) 613 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:03.123460+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:03.123460+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: cluster 2026-03-09T17:42:03.127198+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: cluster 2026-03-09T17:42:03.127198+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:03.132167+0000 mon.c (mon.2) 769 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:03.132167+0000 mon.c (mon.2) 769 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:03.132453+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:03 vm00 bash[28333]: audit 2026-03-09T17:42:03.132453+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:02.135286+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:02.135286+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:02.214596+0000 mgr.y (mgr.14505) 613 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:02.214596+0000 mgr.y (mgr.14505) 613 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:03.123460+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:03.123460+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: cluster 2026-03-09T17:42:03.127198+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: cluster 2026-03-09T17:42:03.127198+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:03.132167+0000 mon.c (mon.2) 769 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:03.132167+0000 mon.c (mon.2) 769 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:03.132453+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:03 vm00 bash[20770]: audit 2026-03-09T17:42:03.132453+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:02.135286+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:02.135286+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:02.214596+0000 mgr.y (mgr.14505) 613 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:02.214596+0000 mgr.y (mgr.14505) 613 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:03.123460+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:03.123460+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: cluster 2026-03-09T17:42:03.127198+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: cluster 2026-03-09T17:42:03.127198+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:03.132167+0000 mon.c (mon.2) 769 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:03.132167+0000 mon.c (mon.2) 769 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:03.132453+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:03 vm02 bash[23351]: audit 2026-03-09T17:42:03.132453+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:04 vm00 bash[28333]: cluster 2026-03-09T17:42:02.861464+0000 mgr.y (mgr.14505) 614 : cluster [DBG] pgmap v1086: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:04 vm00 bash[28333]: cluster 2026-03-09T17:42:02.861464+0000 mgr.y (mgr.14505) 614 : cluster [DBG] pgmap v1086: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:04 vm00 bash[28333]: audit 2026-03-09T17:42:04.126174+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:04 vm00 bash[28333]: audit 2026-03-09T17:42:04.126174+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:04 vm00 bash[28333]: cluster 2026-03-09T17:42:04.129980+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:04 vm00 bash[28333]: cluster 2026-03-09T17:42:04.129980+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:04 vm00 bash[20770]: cluster 2026-03-09T17:42:02.861464+0000 mgr.y (mgr.14505) 614 : cluster [DBG] pgmap v1086: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:04 vm00 bash[20770]: cluster 2026-03-09T17:42:02.861464+0000 mgr.y (mgr.14505) 614 : cluster [DBG] pgmap v1086: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:04 vm00 bash[20770]: audit 2026-03-09T17:42:04.126174+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:04 vm00 bash[20770]: audit 2026-03-09T17:42:04.126174+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:04 vm00 bash[20770]: cluster 2026-03-09T17:42:04.129980+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T17:42:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:04 vm00 bash[20770]: cluster 2026-03-09T17:42:04.129980+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T17:42:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:04 vm02 bash[23351]: cluster 2026-03-09T17:42:02.861464+0000 mgr.y (mgr.14505) 614 : cluster [DBG] pgmap v1086: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:04 vm02 bash[23351]: cluster 2026-03-09T17:42:02.861464+0000 mgr.y (mgr.14505) 614 : cluster [DBG] pgmap v1086: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:04 vm02 bash[23351]: audit 2026-03-09T17:42:04.126174+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:42:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:04 vm02 bash[23351]: audit 2026-03-09T17:42:04.126174+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T17:42:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:04 vm02 bash[23351]: cluster 2026-03-09T17:42:04.129980+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T17:42:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:04 vm02 bash[23351]: cluster 2026-03-09T17:42:04.129980+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:05 vm00 bash[28333]: audit 2026-03-09T17:42:04.179918+0000 mon.c (mon.2) 770 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:05 vm00 bash[28333]: audit 2026-03-09T17:42:04.179918+0000 mon.c (mon.2) 770 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:05 vm00 bash[28333]: audit 2026-03-09T17:42:04.180316+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:05 vm00 bash[28333]: audit 2026-03-09T17:42:04.180316+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:05 vm00 bash[20770]: audit 2026-03-09T17:42:04.179918+0000 mon.c (mon.2) 770 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:05 vm00 bash[20770]: audit 2026-03-09T17:42:04.179918+0000 mon.c (mon.2) 770 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:05 vm00 bash[20770]: audit 2026-03-09T17:42:04.180316+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:05 vm00 bash[20770]: audit 2026-03-09T17:42:04.180316+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:05 vm02 bash[23351]: audit 2026-03-09T17:42:04.179918+0000 mon.c (mon.2) 770 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:05 vm02 bash[23351]: audit 2026-03-09T17:42:04.179918+0000 mon.c (mon.2) 770 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:05 vm02 bash[23351]: audit 2026-03-09T17:42:04.180316+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:05.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:05 vm02 bash[23351]: audit 2026-03-09T17:42:04.180316+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: cluster 2026-03-09T17:42:04.861842+0000 mgr.y (mgr.14505) 615 : cluster [DBG] pgmap v1089: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: cluster 2026-03-09T17:42:04.861842+0000 mgr.y (mgr.14505) 615 : cluster [DBG] pgmap v1089: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:05.172977+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:05.172977+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:05.176443+0000 mon.c (mon.2) 771 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:05.176443+0000 mon.c (mon.2) 771 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: cluster 2026-03-09T17:42:05.184489+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: cluster 2026-03-09T17:42:05.184489+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:05.185692+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:05.185692+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:06.176107+0000 mon.a (mon.0) 3397 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: audit 2026-03-09T17:42:06.176107+0000 mon.a (mon.0) 3397 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: cluster 2026-03-09T17:42:06.179400+0000 mon.a (mon.0) 3398 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:06 vm00 bash[28333]: cluster 2026-03-09T17:42:06.179400+0000 mon.a (mon.0) 3398 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: cluster 2026-03-09T17:42:04.861842+0000 mgr.y (mgr.14505) 615 : cluster [DBG] pgmap v1089: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: cluster 2026-03-09T17:42:04.861842+0000 mgr.y (mgr.14505) 615 : cluster [DBG] pgmap v1089: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:05.172977+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:05.172977+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:05.176443+0000 mon.c (mon.2) 771 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:05.176443+0000 mon.c (mon.2) 771 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: cluster 2026-03-09T17:42:05.184489+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T17:42:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: cluster 2026-03-09T17:42:05.184489+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:05.185692+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:05.185692+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:06.176107+0000 mon.a (mon.0) 3397 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: audit 2026-03-09T17:42:06.176107+0000 mon.a (mon.0) 3397 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: cluster 2026-03-09T17:42:06.179400+0000 mon.a (mon.0) 3398 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:06 vm00 bash[20770]: cluster 2026-03-09T17:42:06.179400+0000 mon.a (mon.0) 3398 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T17:42:06.539 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:42:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:42:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: cluster 2026-03-09T17:42:04.861842+0000 mgr.y (mgr.14505) 615 : cluster [DBG] pgmap v1089: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: cluster 2026-03-09T17:42:04.861842+0000 mgr.y (mgr.14505) 615 : cluster [DBG] pgmap v1089: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:05.172977+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:05.172977+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:05.176443+0000 mon.c (mon.2) 771 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:05.176443+0000 mon.c (mon.2) 771 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: cluster 2026-03-09T17:42:05.184489+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: cluster 2026-03-09T17:42:05.184489+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:05.185692+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:05.185692+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:06.176107+0000 mon.a (mon.0) 3397 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:42:06.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: audit 2026-03-09T17:42:06.176107+0000 mon.a (mon.0) 3397 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]': finished 2026-03-09T17:42:06.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: cluster 2026-03-09T17:42:06.179400+0000 mon.a (mon.0) 3398 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T17:42:06.637 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:06 vm02 bash[23351]: cluster 2026-03-09T17:42:06.179400+0000 mon.a (mon.0) 3398 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.218814+0000 mon.c (mon.2) 772 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.218814+0000 mon.c (mon.2) 772 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.219117+0000 mon.a (mon.0) 3399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.219117+0000 mon.a (mon.0) 3399 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.219756+0000 mon.c (mon.2) 773 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.219756+0000 mon.c (mon.2) 773 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.219946+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:07 vm00 bash[28333]: audit 2026-03-09T17:42:06.219946+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.218814+0000 mon.c (mon.2) 772 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.218814+0000 mon.c (mon.2) 772 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.219117+0000 mon.a (mon.0) 3399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.219117+0000 mon.a (mon.0) 3399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.219756+0000 mon.c (mon.2) 773 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.219756+0000 mon.c (mon.2) 773 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.219946+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:07 vm00 bash[20770]: audit 2026-03-09T17:42:06.219946+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.218814+0000 mon.c (mon.2) 772 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.218814+0000 mon.c (mon.2) 772 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.219117+0000 mon.a (mon.0) 3399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.219117+0000 mon.a (mon.0) 3399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.219756+0000 mon.c (mon.2) 773 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.219756+0000 mon.c (mon.2) 773 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.219946+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:07.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:07 vm02 bash[23351]: audit 2026-03-09T17:42:06.219946+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-142"}]: dispatch 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:08 vm00 bash[28333]: cluster 2026-03-09T17:42:06.862200+0000 mgr.y (mgr.14505) 616 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:08 vm00 bash[28333]: cluster 2026-03-09T17:42:06.862200+0000 mgr.y (mgr.14505) 616 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:08 vm00 bash[28333]: cluster 2026-03-09T17:42:07.193585+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:08 vm00 bash[28333]: cluster 2026-03-09T17:42:07.193585+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:08 vm00 bash[28333]: cluster 2026-03-09T17:42:07.210424+0000 mon.a (mon.0) 3402 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:08 vm00 bash[28333]: cluster 2026-03-09T17:42:07.210424+0000 mon.a (mon.0) 3402 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:08 vm00 bash[20770]: cluster 2026-03-09T17:42:06.862200+0000 mgr.y (mgr.14505) 616 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:08 vm00 bash[20770]: cluster 2026-03-09T17:42:06.862200+0000 mgr.y (mgr.14505) 616 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:08 vm00 bash[20770]: cluster 2026-03-09T17:42:07.193585+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:08 vm00 bash[20770]: cluster 2026-03-09T17:42:07.193585+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:08 vm00 bash[20770]: cluster 2026-03-09T17:42:07.210424+0000 mon.a (mon.0) 3402 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T17:42:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:08 vm00 bash[20770]: cluster 2026-03-09T17:42:07.210424+0000 mon.a (mon.0) 3402 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T17:42:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:08 vm02 bash[23351]: cluster 2026-03-09T17:42:06.862200+0000 mgr.y (mgr.14505) 616 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:08 vm02 bash[23351]: cluster 2026-03-09T17:42:06.862200+0000 mgr.y (mgr.14505) 616 : cluster [DBG] pgmap v1092: 268 pgs: 
268 active+clean; 4.3 MiB data, 975 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:08 vm02 bash[23351]: cluster 2026-03-09T17:42:07.193585+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:08 vm02 bash[23351]: cluster 2026-03-09T17:42:07.193585+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:08 vm02 bash[23351]: cluster 2026-03-09T17:42:07.210424+0000 mon.a (mon.0) 3402 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T17:42:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:08 vm02 bash[23351]: cluster 2026-03-09T17:42:07.210424+0000 mon.a (mon.0) 3402 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:09 vm00 bash[28333]: cluster 2026-03-09T17:42:08.234797+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:09 vm00 bash[28333]: cluster 2026-03-09T17:42:08.234797+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:09 vm00 bash[28333]: audit 2026-03-09T17:42:08.235924+0000 mon.c (mon.2) 774 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:09 vm00 bash[28333]: audit 2026-03-09T17:42:08.235924+0000 mon.c (mon.2) 774 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:09 vm00 bash[28333]: audit 2026-03-09T17:42:08.238655+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:09 vm00 bash[28333]: audit 2026-03-09T17:42:08.238655+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:09 vm00 bash[20770]: cluster 2026-03-09T17:42:08.234797+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:09 vm00 bash[20770]: cluster 2026-03-09T17:42:08.234797+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:09 vm00 bash[20770]: audit 2026-03-09T17:42:08.235924+0000 mon.c (mon.2) 774 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:09 vm00 bash[20770]: audit 2026-03-09T17:42:08.235924+0000 mon.c (mon.2) 774 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:09 vm00 bash[20770]: audit 2026-03-09T17:42:08.238655+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:09 vm00 bash[20770]: audit 2026-03-09T17:42:08.238655+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:09 vm02 bash[23351]: cluster 2026-03-09T17:42:08.234797+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T17:42:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:09 vm02 bash[23351]: cluster 2026-03-09T17:42:08.234797+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T17:42:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:09 vm02 bash[23351]: audit 2026-03-09T17:42:08.235924+0000 mon.c (mon.2) 774 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:09 vm02 bash[23351]: audit 2026-03-09T17:42:08.235924+0000 mon.c (mon.2) 774 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:09 vm02 bash[23351]: audit 2026-03-09T17:42:08.238655+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:09.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:09 vm02 bash[23351]: audit 2026-03-09T17:42:08.238655+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: cluster 2026-03-09T17:42:08.862542+0000 mgr.y (mgr.14505) 617 : cluster [DBG] pgmap v1095: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: cluster 2026-03-09T17:42:08.862542+0000 mgr.y (mgr.14505) 617 : cluster [DBG] pgmap v1095: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:09.216851+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:09.216851+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: cluster 2026-03-09T17:42:09.220061+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: cluster 2026-03-09T17:42:09.220061+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:09.279291+0000 mon.c (mon.2) 775 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:09.279291+0000 mon.c (mon.2) 775 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:09.279554+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:09.279554+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:10.219609+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:10.219609+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: cluster 2026-03-09T17:42:10.222352+0000 mon.a (mon.0) 3409 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: cluster 2026-03-09T17:42:10.222352+0000 mon.a (mon.0) 3409 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:10.233630+0000 mon.c (mon.2) 776 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:10.233630+0000 mon.c (mon.2) 776 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:10.233911+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:10 vm00 bash[28333]: audit 2026-03-09T17:42:10.233911+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: cluster 2026-03-09T17:42:08.862542+0000 mgr.y (mgr.14505) 617 : cluster [DBG] pgmap v1095: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: cluster 2026-03-09T17:42:08.862542+0000 mgr.y (mgr.14505) 617 : cluster [DBG] pgmap v1095: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:09.216851+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:09.216851+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: cluster 2026-03-09T17:42:09.220061+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: cluster 2026-03-09T17:42:09.220061+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:09.279291+0000 mon.c (mon.2) 775 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:09.279291+0000 mon.c (mon.2) 775 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:09.279554+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:09.279554+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:10.219609+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:10.219609+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: cluster 2026-03-09T17:42:10.222352+0000 mon.a (mon.0) 3409 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: cluster 2026-03-09T17:42:10.222352+0000 mon.a (mon.0) 3409 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:10.233630+0000 mon.c (mon.2) 776 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:10.233630+0000 mon.c (mon.2) 776 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:10.233911+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:10 vm00 bash[20770]: audit 2026-03-09T17:42:10.233911+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: cluster 2026-03-09T17:42:08.862542+0000 mgr.y (mgr.14505) 617 : cluster [DBG] pgmap v1095: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: cluster 2026-03-09T17:42:08.862542+0000 mgr.y (mgr.14505) 617 : cluster [DBG] pgmap v1095: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:09.216851+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:09.216851+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: cluster 2026-03-09T17:42:09.220061+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: cluster 2026-03-09T17:42:09.220061+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:09.279291+0000 mon.c (mon.2) 775 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:09.279291+0000 mon.c (mon.2) 775 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:09.279554+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:09.279554+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:10.219609+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:10.219609+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: cluster 2026-03-09T17:42:10.222352+0000 mon.a (mon.0) 3409 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: cluster 2026-03-09T17:42:10.222352+0000 mon.a (mon.0) 3409 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:10.233630+0000 mon.c (mon.2) 776 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:10.233630+0000 mon.c (mon.2) 776 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:10.233911+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:10 vm02 bash[23351]: audit 2026-03-09T17:42:10.233911+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: cluster 2026-03-09T17:42:10.862929+0000 mgr.y (mgr.14505) 618 : cluster [DBG] pgmap v1098: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: cluster 2026-03-09T17:42:10.862929+0000 mgr.y (mgr.14505) 618 : cluster [DBG] pgmap v1098: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: audit 2026-03-09T17:42:11.223402+0000 mon.a (mon.0) 3411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: audit 2026-03-09T17:42:11.223402+0000 mon.a (mon.0) 3411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: cluster 2026-03-09T17:42:11.228859+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: cluster 2026-03-09T17:42:11.228859+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: audit 2026-03-09T17:42:11.231448+0000 mon.c (mon.2) 777 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: audit 2026-03-09T17:42:11.231448+0000 mon.c (mon.2) 777 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: audit 2026-03-09T17:42:11.231641+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:12 vm00 bash[28333]: audit 2026-03-09T17:42:11.231641+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: cluster 2026-03-09T17:42:10.862929+0000 mgr.y (mgr.14505) 618 : cluster [DBG] pgmap v1098: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: cluster 2026-03-09T17:42:10.862929+0000 mgr.y (mgr.14505) 618 : cluster [DBG] pgmap v1098: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: audit 2026-03-09T17:42:11.223402+0000 mon.a (mon.0) 3411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: audit 2026-03-09T17:42:11.223402+0000 mon.a (mon.0) 3411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: cluster 2026-03-09T17:42:11.228859+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: cluster 2026-03-09T17:42:11.228859+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: audit 2026-03-09T17:42:11.231448+0000 mon.c (mon.2) 777 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: audit 2026-03-09T17:42:11.231448+0000 mon.c (mon.2) 777 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: audit 2026-03-09T17:42:11.231641+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:12 vm00 bash[20770]: audit 2026-03-09T17:42:11.231641+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: cluster 2026-03-09T17:42:10.862929+0000 mgr.y (mgr.14505) 618 : cluster [DBG] pgmap v1098: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: cluster 2026-03-09T17:42:10.862929+0000 mgr.y (mgr.14505) 618 : cluster [DBG] pgmap v1098: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: audit 2026-03-09T17:42:11.223402+0000 mon.a (mon.0) 3411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: audit 2026-03-09T17:42:11.223402+0000 mon.a (mon.0) 3411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: cluster 2026-03-09T17:42:11.228859+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: cluster 2026-03-09T17:42:11.228859+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: audit 2026-03-09T17:42:11.231448+0000 mon.c (mon.2) 777 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: audit 2026-03-09T17:42:11.231448+0000 mon.c (mon.2) 777 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: audit 2026-03-09T17:42:11.231641+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:12 vm02 bash[23351]: audit 2026-03-09T17:42:11.231641+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]: dispatch 2026-03-09T17:42:12.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:42:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: audit 2026-03-09T17:42:12.216960+0000 mgr.y (mgr.14505) 619 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: audit 2026-03-09T17:42:12.216960+0000 mgr.y (mgr.14505) 619 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: cluster 2026-03-09T17:42:12.223380+0000 mon.a (mon.0) 3414 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: cluster 2026-03-09T17:42:12.223380+0000 mon.a (mon.0) 3414 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: audit 2026-03-09T17:42:12.226701+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]': finished 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: audit 2026-03-09T17:42:12.226701+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]': finished 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: cluster 2026-03-09T17:42:12.230210+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:13 vm00 bash[28333]: cluster 2026-03-09T17:42:12.230210+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: audit 2026-03-09T17:42:12.216960+0000 mgr.y (mgr.14505) 619 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: audit 2026-03-09T17:42:12.216960+0000 mgr.y (mgr.14505) 619 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: cluster 2026-03-09T17:42:12.223380+0000 mon.a (mon.0) 3414 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: cluster 2026-03-09T17:42:12.223380+0000 mon.a (mon.0) 3414 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: audit 2026-03-09T17:42:12.226701+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]': finished 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: audit 2026-03-09T17:42:12.226701+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]': finished 2026-03-09T17:42:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: cluster 2026-03-09T17:42:12.230210+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T17:42:13.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:13 vm00 bash[20770]: cluster 2026-03-09T17:42:12.230210+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: audit 2026-03-09T17:42:12.216960+0000 mgr.y (mgr.14505) 619 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: audit 2026-03-09T17:42:12.216960+0000 mgr.y (mgr.14505) 619 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: cluster 2026-03-09T17:42:12.223380+0000 mon.a (mon.0) 3414 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: cluster 2026-03-09T17:42:12.223380+0000 mon.a (mon.0) 3414 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: audit 2026-03-09T17:42:12.226701+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]': finished 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: audit 2026-03-09T17:42:12.226701+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-144", "mode": "readproxy"}]': finished 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: cluster 2026-03-09T17:42:12.230210+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T17:42:13.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:13 vm02 bash[23351]: cluster 2026-03-09T17:42:12.230210+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:14 vm00 bash[28333]: cluster 2026-03-09T17:42:12.863666+0000 mgr.y (mgr.14505) 620 : cluster [DBG] pgmap v1101: 268 pgs: 5 unknown, 263 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:14 vm00 bash[28333]: cluster 2026-03-09T17:42:12.863666+0000 mgr.y (mgr.14505) 620 : cluster [DBG] pgmap v1101: 268 pgs: 5 unknown, 263 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:14 vm00 bash[28333]: audit 2026-03-09T17:42:13.345955+0000 mon.c (mon.2) 778 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:14 vm00 bash[28333]: audit 2026-03-09T17:42:13.345955+0000 mon.c (mon.2) 778 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:14 vm00 bash[20770]: cluster 2026-03-09T17:42:12.863666+0000 mgr.y (mgr.14505) 620 : cluster [DBG] pgmap v1101: 268 pgs: 5 unknown, 263 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:14 vm00 bash[20770]: cluster 2026-03-09T17:42:12.863666+0000 mgr.y (mgr.14505) 620 : cluster [DBG] pgmap v1101: 268 pgs: 5 unknown, 263 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:14 vm00 bash[20770]: audit 2026-03-09T17:42:13.345955+0000 mon.c (mon.2) 778 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:14 vm00 bash[20770]: audit 2026-03-09T17:42:13.345955+0000 mon.c (mon.2) 778 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:14 vm02 bash[23351]: cluster 2026-03-09T17:42:12.863666+0000 mgr.y (mgr.14505) 620 : cluster [DBG] pgmap v1101: 268 pgs: 5 unknown, 263 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:42:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:14 vm02 bash[23351]: cluster 2026-03-09T17:42:12.863666+0000 mgr.y (mgr.14505) 620 : cluster [DBG] pgmap v1101: 268 pgs: 5 unknown, 263 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB 
avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:42:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:14 vm02 bash[23351]: audit 2026-03-09T17:42:13.345955+0000 mon.c (mon.2) 778 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:14 vm02 bash[23351]: audit 2026-03-09T17:42:13.345955+0000 mon.c (mon.2) 778 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:16 vm00 bash[28333]: cluster 2026-03-09T17:42:14.864093+0000 mgr.y (mgr.14505) 621 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:42:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:16 vm00 bash[28333]: cluster 2026-03-09T17:42:14.864093+0000 mgr.y (mgr.14505) 621 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:42:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:16 vm00 bash[20770]: cluster 2026-03-09T17:42:14.864093+0000 mgr.y (mgr.14505) 621 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:42:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:16 vm00 bash[20770]: cluster 2026-03-09T17:42:14.864093+0000 mgr.y (mgr.14505) 621 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:42:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:42:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:42:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:42:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:16 vm02 bash[23351]: cluster 2026-03-09T17:42:14.864093+0000 mgr.y (mgr.14505) 621 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:42:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:16 vm02 bash[23351]: cluster 2026-03-09T17:42:14.864093+0000 mgr.y (mgr.14505) 621 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 181 B/s wr, 1 op/s 2026-03-09T17:42:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:17 vm00 bash[28333]: cluster 2026-03-09T17:42:16.811597+0000 mon.a (mon.0) 3417 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:17 vm00 bash[28333]: cluster 2026-03-09T17:42:16.811597+0000 mon.a (mon.0) 3417 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:17 vm00 bash[20770]: cluster 2026-03-09T17:42:16.811597+0000 mon.a (mon.0) 3417 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:17 vm00 bash[20770]: cluster 2026-03-09T17:42:16.811597+0000 
mon.a (mon.0) 3417 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:17 vm02 bash[23351]: cluster 2026-03-09T17:42:16.811597+0000 mon.a (mon.0) 3417 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:17 vm02 bash[23351]: cluster 2026-03-09T17:42:16.811597+0000 mon.a (mon.0) 3417 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:18 vm00 bash[28333]: cluster 2026-03-09T17:42:16.864461+0000 mgr.y (mgr.14505) 622 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 154 B/s wr, 1 op/s 2026-03-09T17:42:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:18 vm00 bash[28333]: cluster 2026-03-09T17:42:16.864461+0000 mgr.y (mgr.14505) 622 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 154 B/s wr, 1 op/s 2026-03-09T17:42:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:18 vm00 bash[20770]: cluster 2026-03-09T17:42:16.864461+0000 mgr.y (mgr.14505) 622 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 154 B/s wr, 1 op/s 2026-03-09T17:42:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:18 vm00 bash[20770]: cluster 2026-03-09T17:42:16.864461+0000 mgr.y (mgr.14505) 622 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 154 B/s wr, 1 op/s 2026-03-09T17:42:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:18 vm02 bash[23351]: cluster 2026-03-09T17:42:16.864461+0000 mgr.y (mgr.14505) 622 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 154 B/s wr, 1 op/s 2026-03-09T17:42:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:18 vm02 bash[23351]: cluster 2026-03-09T17:42:16.864461+0000 mgr.y (mgr.14505) 622 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 154 B/s wr, 1 op/s 2026-03-09T17:42:20.286 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:42:20 vm02 bash[51223]: logger=sqlstore.transactions t=2026-03-09T17:42:20.009027717Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T17:42:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:20 vm00 bash[28333]: cluster 2026-03-09T17:42:18.865202+0000 mgr.y (mgr.14505) 623 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T17:42:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:20 vm00 bash[28333]: cluster 2026-03-09T17:42:18.865202+0000 mgr.y (mgr.14505) 623 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T17:42:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:20 vm00 bash[20770]: cluster 2026-03-09T17:42:18.865202+0000 mgr.y (mgr.14505) 623 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 
976 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T17:42:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:20 vm00 bash[20770]: cluster 2026-03-09T17:42:18.865202+0000 mgr.y (mgr.14505) 623 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T17:42:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:20 vm02 bash[23351]: cluster 2026-03-09T17:42:18.865202+0000 mgr.y (mgr.14505) 623 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T17:42:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:20 vm02 bash[23351]: cluster 2026-03-09T17:42:18.865202+0000 mgr.y (mgr.14505) 623 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T17:42:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:22 vm02 bash[23351]: cluster 2026-03-09T17:42:20.865541+0000 mgr.y (mgr.14505) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 106 B/s wr, 1 op/s 2026-03-09T17:42:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:22 vm02 bash[23351]: cluster 2026-03-09T17:42:20.865541+0000 mgr.y (mgr.14505) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 106 B/s wr, 1 op/s 2026-03-09T17:42:22.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:42:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:42:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:22 vm00 bash[28333]: cluster 2026-03-09T17:42:20.865541+0000 mgr.y (mgr.14505) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 106 B/s wr, 1 op/s 2026-03-09T17:42:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:22 vm00 bash[28333]: cluster 2026-03-09T17:42:20.865541+0000 mgr.y (mgr.14505) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 106 B/s wr, 1 op/s 2026-03-09T17:42:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:22 vm00 bash[20770]: cluster 2026-03-09T17:42:20.865541+0000 mgr.y (mgr.14505) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 106 B/s wr, 1 op/s 2026-03-09T17:42:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:22 vm00 bash[20770]: cluster 2026-03-09T17:42:20.865541+0000 mgr.y (mgr.14505) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 106 B/s wr, 1 op/s 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:23 vm00 bash[28333]: audit 2026-03-09T17:42:22.225822+0000 mgr.y (mgr.14505) 625 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:23 vm00 bash[28333]: audit 2026-03-09T17:42:22.225822+0000 mgr.y (mgr.14505) 625 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:23.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:23 vm00 bash[28333]: audit 2026-03-09T17:42:22.339289+0000 mon.c (mon.2) 779 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:23 vm00 bash[28333]: audit 2026-03-09T17:42:22.339289+0000 mon.c (mon.2) 779 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:23 vm00 bash[28333]: audit 2026-03-09T17:42:22.339559+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:23 vm00 bash[28333]: audit 2026-03-09T17:42:22.339559+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:23 vm00 bash[20770]: audit 2026-03-09T17:42:22.225822+0000 mgr.y (mgr.14505) 625 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:23 vm00 bash[20770]: audit 2026-03-09T17:42:22.225822+0000 mgr.y (mgr.14505) 625 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:23 vm00 bash[20770]: audit 2026-03-09T17:42:22.339289+0000 mon.c (mon.2) 779 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:23 vm00 bash[20770]: audit 2026-03-09T17:42:22.339289+0000 mon.c (mon.2) 779 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:23 vm00 bash[20770]: audit 2026-03-09T17:42:22.339559+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:23 vm00 bash[20770]: audit 2026-03-09T17:42:22.339559+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:23 vm02 bash[23351]: audit 2026-03-09T17:42:22.225822+0000 mgr.y (mgr.14505) 625 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:23 vm02 bash[23351]: audit 2026-03-09T17:42:22.225822+0000 mgr.y (mgr.14505) 625 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:23 vm02 bash[23351]: audit 2026-03-09T17:42:22.339289+0000 mon.c (mon.2) 779 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:23 vm02 bash[23351]: audit 2026-03-09T17:42:22.339289+0000 mon.c (mon.2) 779 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:23 vm02 bash[23351]: audit 2026-03-09T17:42:22.339559+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:23.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:23 vm02 bash[23351]: audit 2026-03-09T17:42:22.339559+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: cluster 2026-03-09T17:42:22.866144+0000 mgr.y (mgr.14505) 626 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 96 B/s wr, 1 op/s 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: cluster 2026-03-09T17:42:22.866144+0000 mgr.y (mgr.14505) 626 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 96 B/s wr, 1 op/s 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: audit 2026-03-09T17:42:23.439748+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: audit 2026-03-09T17:42:23.439748+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: audit 2026-03-09T17:42:23.444420+0000 mon.c (mon.2) 780 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: audit 2026-03-09T17:42:23.444420+0000 mon.c (mon.2) 780 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: cluster 2026-03-09T17:42:23.453843+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: cluster 2026-03-09T17:42:23.453843+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: audit 2026-03-09T17:42:23.454588+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:24 vm00 bash[28333]: audit 2026-03-09T17:42:23.454588+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: cluster 2026-03-09T17:42:22.866144+0000 mgr.y (mgr.14505) 626 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 96 B/s wr, 1 op/s 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: cluster 2026-03-09T17:42:22.866144+0000 mgr.y (mgr.14505) 626 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 96 B/s wr, 1 op/s 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: audit 2026-03-09T17:42:23.439748+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: audit 2026-03-09T17:42:23.439748+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: audit 2026-03-09T17:42:23.444420+0000 mon.c (mon.2) 780 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: audit 2026-03-09T17:42:23.444420+0000 mon.c (mon.2) 780 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: cluster 2026-03-09T17:42:23.453843+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: cluster 2026-03-09T17:42:23.453843+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: audit 2026-03-09T17:42:23.454588+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:24 vm00 bash[20770]: audit 2026-03-09T17:42:23.454588+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: cluster 2026-03-09T17:42:22.866144+0000 mgr.y (mgr.14505) 626 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 96 B/s wr, 1 op/s 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: cluster 2026-03-09T17:42:22.866144+0000 mgr.y (mgr.14505) 626 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 96 B/s wr, 1 op/s 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: audit 2026-03-09T17:42:23.439748+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: audit 2026-03-09T17:42:23.439748+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: audit 2026-03-09T17:42:23.444420+0000 mon.c (mon.2) 780 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: audit 2026-03-09T17:42:23.444420+0000 mon.c (mon.2) 780 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: cluster 2026-03-09T17:42:23.453843+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: cluster 2026-03-09T17:42:23.453843+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: audit 2026-03-09T17:42:23.454588+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:24 vm02 bash[23351]: audit 2026-03-09T17:42:23.454588+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: cluster 2026-03-09T17:42:24.441107+0000 mon.a (mon.0) 3422 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: cluster 2026-03-09T17:42:24.441107+0000 mon.a (mon.0) 3422 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.443756+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.443756+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: cluster 2026-03-09T17:42:24.452797+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: cluster 2026-03-09T17:42:24.452797+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.481958+0000 mon.c (mon.2) 781 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.481958+0000 mon.c (mon.2) 781 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.482393+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.482393+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.483404+0000 mon.c (mon.2) 782 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.483404+0000 mon.c (mon.2) 782 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.483675+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:25 vm00 bash[28333]: audit 2026-03-09T17:42:24.483675+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: cluster 2026-03-09T17:42:24.441107+0000 mon.a (mon.0) 3422 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: cluster 2026-03-09T17:42:24.441107+0000 mon.a (mon.0) 3422 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.443756+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.443756+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: cluster 2026-03-09T17:42:24.452797+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: cluster 2026-03-09T17:42:24.452797+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.481958+0000 mon.c (mon.2) 781 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.481958+0000 mon.c (mon.2) 781 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.482393+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.482393+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.483404+0000 mon.c (mon.2) 782 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.483404+0000 mon.c (mon.2) 782 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.483675+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:25 vm00 bash[20770]: audit 2026-03-09T17:42:24.483675+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: cluster 2026-03-09T17:42:24.441107+0000 mon.a (mon.0) 3422 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: cluster 2026-03-09T17:42:24.441107+0000 mon.a (mon.0) 3422 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.443756+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.443756+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]': finished 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: cluster 2026-03-09T17:42:24.452797+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: cluster 2026-03-09T17:42:24.452797+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.481958+0000 mon.c (mon.2) 781 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.481958+0000 mon.c (mon.2) 781 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.482393+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.482393+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:25.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.483404+0000 mon.c (mon.2) 782 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.483404+0000 mon.c (mon.2) 782 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.483675+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:25.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:25 vm02 bash[23351]: audit 2026-03-09T17:42:24.483675+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-144"}]: dispatch 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:26 vm00 bash[28333]: cluster 2026-03-09T17:42:24.866497+0000 mgr.y (mgr.14505) 627 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:26 vm00 bash[28333]: cluster 2026-03-09T17:42:24.866497+0000 mgr.y (mgr.14505) 627 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:26 vm00 bash[28333]: cluster 2026-03-09T17:42:25.446877+0000 mon.a (mon.0) 3427 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:26 vm00 bash[28333]: cluster 2026-03-09T17:42:25.446877+0000 mon.a (mon.0) 3427 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:26 vm00 bash[28333]: cluster 2026-03-09T17:42:25.460664+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:26 vm00 bash[28333]: cluster 2026-03-09T17:42:25.460664+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:26 vm00 bash[20770]: cluster 2026-03-09T17:42:24.866497+0000 mgr.y (mgr.14505) 627 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:26 vm00 bash[20770]: cluster 2026-03-09T17:42:24.866497+0000 mgr.y (mgr.14505) 627 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:26 vm00 bash[20770]: cluster 2026-03-09T17:42:25.446877+0000 mon.a (mon.0) 3427 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:26 vm00 bash[20770]: cluster 2026-03-09T17:42:25.446877+0000 mon.a (mon.0) 3427 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:26 vm00 bash[20770]: cluster 2026-03-09T17:42:25.460664+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:26 vm00 bash[20770]: cluster 2026-03-09T17:42:25.460664+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T17:42:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:42:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:42:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:42:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:26 vm02 bash[23351]: cluster 2026-03-09T17:42:24.866497+0000 mgr.y (mgr.14505) 627 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:26 vm02 bash[23351]: cluster 2026-03-09T17:42:24.866497+0000 mgr.y (mgr.14505) 627 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:42:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:26 vm02 bash[23351]: cluster 2026-03-09T17:42:25.446877+0000 mon.a (mon.0) 3427 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:26 vm02 bash[23351]: cluster 2026-03-09T17:42:25.446877+0000 mon.a (mon.0) 3427 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:26 vm02 bash[23351]: cluster 2026-03-09T17:42:25.460664+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T17:42:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:26 vm02 bash[23351]: cluster 2026-03-09T17:42:25.460664+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: cluster 2026-03-09T17:42:26.485305+0000 mon.a (mon.0) 3429 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: cluster 2026-03-09T17:42:26.485305+0000 mon.a (mon.0) 3429 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: audit 2026-03-09T17:42:26.492417+0000 mon.c (mon.2) 783 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: audit 2026-03-09T17:42:26.492417+0000 mon.c (mon.2) 783 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: audit 2026-03-09T17:42:26.492932+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: audit 2026-03-09T17:42:26.492932+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: cluster 2026-03-09T17:42:26.866984+0000 mgr.y (mgr.14505) 628 : cluster [DBG] pgmap v1112: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:27 vm00 bash[28333]: cluster 2026-03-09T17:42:26.866984+0000 mgr.y (mgr.14505) 628 : cluster [DBG] pgmap v1112: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: cluster 2026-03-09T17:42:26.485305+0000 mon.a (mon.0) 3429 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: cluster 2026-03-09T17:42:26.485305+0000 mon.a (mon.0) 3429 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: audit 2026-03-09T17:42:26.492417+0000 mon.c (mon.2) 783 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: audit 2026-03-09T17:42:26.492417+0000 mon.c (mon.2) 783 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: audit 2026-03-09T17:42:26.492932+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: audit 2026-03-09T17:42:26.492932+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: cluster 2026-03-09T17:42:26.866984+0000 mgr.y (mgr.14505) 628 : cluster [DBG] pgmap v1112: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:27.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:27 vm00 bash[20770]: cluster 2026-03-09T17:42:26.866984+0000 mgr.y (mgr.14505) 628 : cluster [DBG] pgmap v1112: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: cluster 2026-03-09T17:42:26.485305+0000 mon.a (mon.0) 3429 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: cluster 2026-03-09T17:42:26.485305+0000 mon.a (mon.0) 3429 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: audit 2026-03-09T17:42:26.492417+0000 mon.c (mon.2) 783 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: audit 2026-03-09T17:42:26.492417+0000 mon.c (mon.2) 783 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: audit 2026-03-09T17:42:26.492932+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: audit 2026-03-09T17:42:26.492932+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: cluster 2026-03-09T17:42:26.866984+0000 mgr.y (mgr.14505) 628 : cluster [DBG] pgmap v1112: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:27 vm02 bash[23351]: cluster 2026-03-09T17:42:26.866984+0000 mgr.y (mgr.14505) 628 : cluster [DBG] pgmap v1112: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 976 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:27.502173+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:27.502173+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: cluster 2026-03-09T17:42:27.522427+0000 mon.a (mon.0) 3432 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: cluster 2026-03-09T17:42:27.522427+0000 mon.a (mon.0) 3432 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:27.560346+0000 mon.c (mon.2) 784 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:27.560346+0000 mon.c (mon.2) 784 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:27.560680+0000 mon.a (mon.0) 3433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:27.560680+0000 mon.a (mon.0) 3433 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:28.357765+0000 mon.a (mon.0) 3434 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:28.357765+0000 mon.a (mon.0) 3434 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:28.360334+0000 mon.c (mon.2) 785 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:28 vm00 bash[28333]: audit 2026-03-09T17:42:28.360334+0000 mon.c (mon.2) 785 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:27.502173+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:27.502173+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: cluster 2026-03-09T17:42:27.522427+0000 mon.a (mon.0) 3432 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: cluster 2026-03-09T17:42:27.522427+0000 mon.a (mon.0) 3432 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:27.560346+0000 mon.c (mon.2) 784 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:27.560346+0000 mon.c (mon.2) 784 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:27.560680+0000 mon.a (mon.0) 3433 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:27.560680+0000 mon.a (mon.0) 3433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:28.357765+0000 mon.a (mon.0) 3434 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:28.357765+0000 mon.a (mon.0) 3434 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:28.360334+0000 mon.c (mon.2) 785 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:28 vm00 bash[20770]: audit 2026-03-09T17:42:28.360334+0000 mon.c (mon.2) 785 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:27.502173+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:27.502173+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: cluster 2026-03-09T17:42:27.522427+0000 mon.a (mon.0) 3432 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: cluster 2026-03-09T17:42:27.522427+0000 mon.a (mon.0) 3432 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:27.560346+0000 mon.c (mon.2) 784 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:27.560346+0000 mon.c (mon.2) 784 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:27.560680+0000 mon.a (mon.0) 3433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:27.560680+0000 mon.a (mon.0) 3433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:28.357765+0000 mon.a (mon.0) 3434 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:28.357765+0000 mon.a (mon.0) 3434 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:28.360334+0000 mon.c (mon.2) 785 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:28 vm02 bash[23351]: audit 2026-03-09T17:42:28.360334+0000 mon.c (mon.2) 785 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: audit 2026-03-09T17:42:28.505252+0000 mon.a (mon.0) 3435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: audit 2026-03-09T17:42:28.505252+0000 mon.a (mon.0) 3435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: cluster 2026-03-09T17:42:28.509028+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: cluster 2026-03-09T17:42:28.509028+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: audit 2026-03-09T17:42:28.516065+0000 mon.c (mon.2) 786 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: audit 2026-03-09T17:42:28.516065+0000 mon.c (mon.2) 786 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: audit 2026-03-09T17:42:28.523898+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: audit 2026-03-09T17:42:28.523898+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: cluster 2026-03-09T17:42:28.867274+0000 mgr.y (mgr.14505) 629 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:29 vm00 bash[28333]: cluster 2026-03-09T17:42:28.867274+0000 mgr.y (mgr.14505) 629 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: audit 2026-03-09T17:42:28.505252+0000 mon.a (mon.0) 3435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: audit 2026-03-09T17:42:28.505252+0000 mon.a (mon.0) 3435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: cluster 2026-03-09T17:42:28.509028+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: cluster 2026-03-09T17:42:28.509028+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: audit 2026-03-09T17:42:28.516065+0000 mon.c (mon.2) 786 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: audit 2026-03-09T17:42:28.516065+0000 mon.c (mon.2) 786 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: audit 2026-03-09T17:42:28.523898+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: audit 2026-03-09T17:42:28.523898+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: cluster 2026-03-09T17:42:28.867274+0000 mgr.y (mgr.14505) 629 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:29 vm00 bash[20770]: cluster 2026-03-09T17:42:28.867274+0000 mgr.y (mgr.14505) 629 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: audit 2026-03-09T17:42:28.505252+0000 mon.a (mon.0) 3435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: audit 2026-03-09T17:42:28.505252+0000 mon.a (mon.0) 3435 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: cluster 2026-03-09T17:42:28.509028+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: cluster 2026-03-09T17:42:28.509028+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: audit 2026-03-09T17:42:28.516065+0000 mon.c (mon.2) 786 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: audit 2026-03-09T17:42:28.516065+0000 mon.c (mon.2) 786 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: audit 2026-03-09T17:42:28.523898+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: audit 2026-03-09T17:42:28.523898+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: cluster 2026-03-09T17:42:28.867274+0000 mgr.y (mgr.14505) 629 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:29 vm02 bash[23351]: cluster 2026-03-09T17:42:28.867274+0000 mgr.y (mgr.14505) 629 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: audit 2026-03-09T17:42:29.526262+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: audit 2026-03-09T17:42:29.526262+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: audit 2026-03-09T17:42:29.536411+0000 mon.c (mon.2) 787 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: audit 2026-03-09T17:42:29.536411+0000 mon.c (mon.2) 787 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: cluster 2026-03-09T17:42:29.539370+0000 mon.a (mon.0) 3439 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: cluster 2026-03-09T17:42:29.539370+0000 mon.a (mon.0) 3439 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: audit 2026-03-09T17:42:29.539551+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:30 vm02 bash[23351]: audit 2026-03-09T17:42:29.539551+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: audit 2026-03-09T17:42:29.526262+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: audit 2026-03-09T17:42:29.526262+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: audit 2026-03-09T17:42:29.536411+0000 mon.c (mon.2) 787 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: audit 2026-03-09T17:42:29.536411+0000 mon.c (mon.2) 787 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: cluster 2026-03-09T17:42:29.539370+0000 mon.a (mon.0) 3439 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: cluster 2026-03-09T17:42:29.539370+0000 mon.a (mon.0) 3439 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: audit 2026-03-09T17:42:29.539551+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:30 vm00 bash[28333]: audit 2026-03-09T17:42:29.539551+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: audit 2026-03-09T17:42:29.526262+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: audit 2026-03-09T17:42:29.526262+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-60118-111", "overlaypool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: audit 2026-03-09T17:42:29.536411+0000 mon.c (mon.2) 787 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: audit 2026-03-09T17:42:29.536411+0000 mon.c (mon.2) 787 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: cluster 2026-03-09T17:42:29.539370+0000 mon.a (mon.0) 3439 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: cluster 2026-03-09T17:42:29.539370+0000 mon.a (mon.0) 3439 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: audit 2026-03-09T17:42:29.539551+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:30 vm00 bash[20770]: audit 2026-03-09T17:42:29.539551+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]: dispatch 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: cluster 2026-03-09T17:42:30.526351+0000 mon.a (mon.0) 3441 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: cluster 2026-03-09T17:42:30.526351+0000 mon.a (mon.0) 3441 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: audit 2026-03-09T17:42:30.529625+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]': finished 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: audit 2026-03-09T17:42:30.529625+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]': finished 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: cluster 2026-03-09T17:42:30.533952+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: cluster 2026-03-09T17:42:30.533952+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: audit 2026-03-09T17:42:30.586743+0000 mon.c (mon.2) 788 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: audit 2026-03-09T17:42:30.586743+0000 mon.c (mon.2) 788 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: audit 2026-03-09T17:42:30.587079+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: audit 2026-03-09T17:42:30.587079+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: cluster 2026-03-09T17:42:30.867661+0000 mgr.y (mgr.14505) 630 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:31 vm02 bash[23351]: cluster 2026-03-09T17:42:30.867661+0000 mgr.y (mgr.14505) 630 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: cluster 2026-03-09T17:42:30.526351+0000 mon.a (mon.0) 3441 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: cluster 2026-03-09T17:42:30.526351+0000 mon.a (mon.0) 3441 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: audit 2026-03-09T17:42:30.529625+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]': finished 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: audit 2026-03-09T17:42:30.529625+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]': finished 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: cluster 2026-03-09T17:42:30.533952+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: cluster 2026-03-09T17:42:30.533952+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: audit 2026-03-09T17:42:30.586743+0000 mon.c (mon.2) 788 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: audit 2026-03-09T17:42:30.586743+0000 mon.c (mon.2) 788 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: audit 2026-03-09T17:42:30.587079+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: audit 2026-03-09T17:42:30.587079+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: cluster 2026-03-09T17:42:30.867661+0000 mgr.y (mgr.14505) 630 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:31 vm00 bash[28333]: cluster 2026-03-09T17:42:30.867661+0000 mgr.y (mgr.14505) 630 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: cluster 2026-03-09T17:42:30.526351+0000 mon.a (mon.0) 3441 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: cluster 2026-03-09T17:42:30.526351+0000 mon.a (mon.0) 3441 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: audit 2026-03-09T17:42:30.529625+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]': finished 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: audit 2026-03-09T17:42:30.529625+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60118-146", "mode": "writeback"}]': finished 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: cluster 2026-03-09T17:42:30.533952+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: cluster 2026-03-09T17:42:30.533952+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: audit 2026-03-09T17:42:30.586743+0000 mon.c (mon.2) 788 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: audit 2026-03-09T17:42:30.586743+0000 mon.c (mon.2) 788 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: audit 2026-03-09T17:42:30.587079+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: audit 2026-03-09T17:42:30.587079+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: cluster 2026-03-09T17:42:30.867661+0000 mgr.y (mgr.14505) 630 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:32.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:31 vm00 bash[20770]: cluster 2026-03-09T17:42:30.867661+0000 mgr.y (mgr.14505) 630 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T17:42:32.570 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:42:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:31.551338+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:31.551338+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:31.566688+0000 mon.c (mon.2) 789 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:31.566688+0000 mon.c (mon.2) 789 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: cluster 2026-03-09T17:42:31.570050+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: cluster 2026-03-09T17:42:31.570050+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:31.570741+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:31.570741+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: cluster 2026-03-09T17:42:31.821978+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: cluster 2026-03-09T17:42:31.821978+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:32.234029+0000 mgr.y (mgr.14505) 631 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:32 vm02 bash[23351]: audit 2026-03-09T17:42:32.234029+0000 mgr.y (mgr.14505) 631 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:31.551338+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:31.551338+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:31.566688+0000 mon.c (mon.2) 789 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:31.566688+0000 mon.c (mon.2) 789 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: cluster 2026-03-09T17:42:31.570050+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: cluster 2026-03-09T17:42:31.570050+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:31.570741+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:31.570741+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: cluster 2026-03-09T17:42:31.821978+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: cluster 2026-03-09T17:42:31.821978+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:32.234029+0000 mgr.y (mgr.14505) 631 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:32 vm00 bash[28333]: audit 2026-03-09T17:42:32.234029+0000 mgr.y (mgr.14505) 631 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:31.551338+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:31.551338+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:31.566688+0000 mon.c (mon.2) 789 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:31.566688+0000 mon.c (mon.2) 789 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: cluster 2026-03-09T17:42:31.570050+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: cluster 2026-03-09T17:42:31.570050+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:31.570741+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:31.570741+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: cluster 2026-03-09T17:42:31.821978+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: cluster 2026-03-09T17:42:31.821978+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:32.234029+0000 mgr.y (mgr.14505) 631 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:32 vm00 bash[20770]: audit 2026-03-09T17:42:32.234029+0000 mgr.y (mgr.14505) 631 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:32.554300+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:32.554300+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:32.558388+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:32.558388+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:32.558718+0000 mon.c (mon.2) 790 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:32.558718+0000 mon.c (mon.2) 790 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:32.564101+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:32.564101+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:32.868305+0000 mgr.y (mgr.14505) 632 : cluster [DBG] pgmap v1121: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:32.868305+0000 mgr.y (mgr.14505) 632 : cluster [DBG] pgmap v1121: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:33.554559+0000 mon.a (mon.0) 3452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:33.554559+0000 mon.a (mon.0) 3452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:33.557779+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:33.557779+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:33.566078+0000 mon.c (mon.2) 791 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:33.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: audit 2026-03-09T17:42:33.566078+0000 mon.c (mon.2) 791 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:33.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:33.567350+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T17:42:33.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:33 vm02 bash[23351]: cluster 2026-03-09T17:42:33.567350+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:32.554300+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:32.554300+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:32.558388+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:32.558388+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:32.558718+0000 mon.c (mon.2) 790 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:32.558718+0000 mon.c (mon.2) 790 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:32.564101+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:32.564101+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:32.868305+0000 mgr.y (mgr.14505) 632 : cluster [DBG] pgmap v1121: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:32.868305+0000 mgr.y (mgr.14505) 632 : cluster [DBG] pgmap v1121: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:33.554559+0000 mon.a (mon.0) 3452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:33.554559+0000 mon.a (mon.0) 3452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:33.557779+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:33.557779+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:33.566078+0000 mon.c (mon.2) 791 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: audit 2026-03-09T17:42:33.566078+0000 mon.c (mon.2) 791 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:33.567350+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:33 vm00 bash[28333]: cluster 2026-03-09T17:42:33.567350+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:32.554300+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:32.554300+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:32.558388+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:32.558388+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:32.558718+0000 mon.c (mon.2) 790 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:32.558718+0000 mon.c (mon.2) 790 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:32.564101+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:32.564101+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:32.868305+0000 mgr.y (mgr.14505) 632 : cluster [DBG] pgmap v1121: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:32.868305+0000 mgr.y (mgr.14505) 632 : cluster [DBG] pgmap v1121: 268 pgs: 268 active+clean; 4.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:33.554559+0000 mon.a (mon.0) 3452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:33.554559+0000 mon.a (mon.0) 3452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:33.557779+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:33.557779+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:33.566078+0000 mon.c (mon.2) 791 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: audit 2026-03-09T17:42:33.566078+0000 mon.c (mon.2) 791 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:33.567350+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T17:42:34.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:33 vm00 bash[20770]: cluster 2026-03-09T17:42:33.567350+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T17:42:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:33.567916+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:33.567916+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:34.560904+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:34.560904+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: cluster 2026-03-09T17:42:34.571593+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: cluster 2026-03-09T17:42:34.571593+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:34.572810+0000 mon.c (mon.2) 792 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:34.572810+0000 mon.c (mon.2) 792 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:34.573064+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:34.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:34 vm02 bash[23351]: audit 2026-03-09T17:42:34.573064+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:33.567916+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:33.567916+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:34.560904+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:34.560904+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: cluster 2026-03-09T17:42:34.571593+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: cluster 2026-03-09T17:42:34.571593+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:34.572810+0000 mon.c (mon.2) 792 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:34.572810+0000 mon.c (mon.2) 792 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:34.573064+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:34 vm00 bash[28333]: audit 2026-03-09T17:42:34.573064+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:33.567916+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:33.567916+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:34.560904+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:34.560904+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: cluster 2026-03-09T17:42:34.571593+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: cluster 2026-03-09T17:42:34.571593+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:34.572810+0000 mon.c (mon.2) 792 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:34.572810+0000 mon.c (mon.2) 792 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:34.573064+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:34 vm00 bash[20770]: audit 2026-03-09T17:42:34.573064+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T17:42:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:35 vm02 bash[23351]: cluster 2026-03-09T17:42:34.868587+0000 mgr.y (mgr.14505) 633 : cluster [DBG] pgmap v1124: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:35 vm02 bash[23351]: cluster 2026-03-09T17:42:34.868587+0000 mgr.y (mgr.14505) 633 : cluster [DBG] pgmap v1124: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:35 vm02 bash[23351]: audit 2026-03-09T17:42:35.563813+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:42:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:35 vm02 bash[23351]: audit 2026-03-09T17:42:35.563813+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:42:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:35 vm02 bash[23351]: cluster 2026-03-09T17:42:35.566478+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T17:42:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:35 vm02 bash[23351]: cluster 2026-03-09T17:42:35.566478+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:35 vm00 bash[28333]: cluster 2026-03-09T17:42:34.868587+0000 mgr.y (mgr.14505) 633 : cluster [DBG] pgmap v1124: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:35 vm00 bash[28333]: cluster 2026-03-09T17:42:34.868587+0000 mgr.y (mgr.14505) 633 : cluster [DBG] pgmap v1124: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:35 vm00 bash[28333]: audit 2026-03-09T17:42:35.563813+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:35 vm00 bash[28333]: audit 2026-03-09T17:42:35.563813+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:35 vm00 bash[28333]: cluster 2026-03-09T17:42:35.566478+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:35 vm00 bash[28333]: cluster 2026-03-09T17:42:35.566478+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:35 vm00 bash[20770]: cluster 2026-03-09T17:42:34.868587+0000 mgr.y (mgr.14505) 633 : cluster [DBG] pgmap v1124: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:35 vm00 bash[20770]: cluster 2026-03-09T17:42:34.868587+0000 mgr.y (mgr.14505) 633 : cluster [DBG] pgmap v1124: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:35 vm00 bash[20770]: audit 2026-03-09T17:42:35.563813+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:35 vm00 bash[20770]: audit 2026-03-09T17:42:35.563813+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:35 vm00 bash[20770]: cluster 2026-03-09T17:42:35.566478+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T17:42:36.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:35 vm00 bash[20770]: cluster 2026-03-09T17:42:35.566478+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T17:42:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:42:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:42:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:42:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:36 vm00 bash[28333]: cluster 2026-03-09T17:42:36.919562+0000 mon.a (mon.0) 3461 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:42:37.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:36 vm00 bash[28333]: cluster 2026-03-09T17:42:36.919562+0000 mon.a (mon.0) 3461 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:42:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:36 vm00 bash[20770]: cluster 2026-03-09T17:42:36.919562+0000 mon.a (mon.0) 3461 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:42:37.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:36 vm00 bash[20770]: cluster 2026-03-09T17:42:36.919562+0000 mon.a (mon.0) 3461 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:42:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:36 vm02 bash[23351]: cluster 2026-03-09T17:42:36.919562+0000 mon.a (mon.0) 3461 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:42:37.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:36 vm02 bash[23351]: cluster 2026-03-09T17:42:36.919562+0000 mon.a (mon.0) 3461 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T17:42:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:37 vm00 bash[28333]: cluster 2026-03-09T17:42:36.868890+0000 mgr.y (mgr.14505) 634 : cluster [DBG] pgmap v1126: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:37 vm00 bash[28333]: cluster 2026-03-09T17:42:36.868890+0000 mgr.y (mgr.14505) 634 : cluster [DBG] pgmap v1126: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:37 vm00 bash[20770]: cluster 2026-03-09T17:42:36.868890+0000 mgr.y (mgr.14505) 634 : cluster [DBG] pgmap v1126: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:37 vm00 bash[20770]: cluster 2026-03-09T17:42:36.868890+0000 mgr.y (mgr.14505) 634 : cluster [DBG] pgmap v1126: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:37 
vm02 bash[23351]: cluster 2026-03-09T17:42:36.868890+0000 mgr.y (mgr.14505) 634 : cluster [DBG] pgmap v1126: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:37 vm02 bash[23351]: cluster 2026-03-09T17:42:36.868890+0000 mgr.y (mgr.14505) 634 : cluster [DBG] pgmap v1126: 268 pgs: 268 active+clean; 4.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T17:42:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:39 vm00 bash[28333]: cluster 2026-03-09T17:42:38.869796+0000 mgr.y (mgr.14505) 635 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:39 vm00 bash[28333]: cluster 2026-03-09T17:42:38.869796+0000 mgr.y (mgr.14505) 635 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:39 vm00 bash[20770]: cluster 2026-03-09T17:42:38.869796+0000 mgr.y (mgr.14505) 635 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:39 vm00 bash[20770]: cluster 2026-03-09T17:42:38.869796+0000 mgr.y (mgr.14505) 635 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:39 vm02 bash[23351]: cluster 2026-03-09T17:42:38.869796+0000 mgr.y (mgr.14505) 635 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:39 vm02 bash[23351]: cluster 2026-03-09T17:42:38.869796+0000 mgr.y (mgr.14505) 635 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:42.245 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:41 vm02 bash[23351]: cluster 2026-03-09T17:42:40.870163+0000 mgr.y (mgr.14505) 636 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 700 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:42:42.246 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:41 vm02 bash[23351]: cluster 2026-03-09T17:42:40.870163+0000 mgr.y (mgr.14505) 636 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 700 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:42:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:41 vm00 bash[28333]: cluster 2026-03-09T17:42:40.870163+0000 mgr.y (mgr.14505) 636 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 700 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:42:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:41 vm00 bash[28333]: cluster 2026-03-09T17:42:40.870163+0000 mgr.y (mgr.14505) 636 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 700 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:42:42.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:41 vm00 bash[20770]: cluster 2026-03-09T17:42:40.870163+0000 mgr.y (mgr.14505) 636 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 700 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:42:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:41 vm00 bash[20770]: cluster 2026-03-09T17:42:40.870163+0000 mgr.y (mgr.14505) 636 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 700 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T17:42:42.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:42:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:42:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:42 vm00 bash[28333]: audit 2026-03-09T17:42:42.244649+0000 mgr.y (mgr.14505) 637 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:42 vm00 bash[28333]: audit 2026-03-09T17:42:42.244649+0000 mgr.y (mgr.14505) 637 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:42 vm00 bash[20770]: audit 2026-03-09T17:42:42.244649+0000 mgr.y (mgr.14505) 637 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:42 vm00 bash[20770]: audit 2026-03-09T17:42:42.244649+0000 mgr.y (mgr.14505) 637 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:42 vm02 bash[23351]: audit 2026-03-09T17:42:42.244649+0000 mgr.y (mgr.14505) 637 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:42 vm02 bash[23351]: audit 2026-03-09T17:42:42.244649+0000 mgr.y (mgr.14505) 637 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:44 vm02 bash[23351]: cluster 2026-03-09T17:42:42.871040+0000 mgr.y (mgr.14505) 638 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:44 vm02 bash[23351]: cluster 2026-03-09T17:42:42.871040+0000 mgr.y (mgr.14505) 638 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:44 vm02 bash[23351]: audit 2026-03-09T17:42:43.374022+0000 mon.a (mon.0) 3462 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:44 vm02 bash[23351]: audit 2026-03-09T17:42:43.374022+0000 mon.a (mon.0) 3462 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:44 vm02 bash[23351]: audit 
2026-03-09T17:42:43.375059+0000 mon.c (mon.2) 793 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:44 vm02 bash[23351]: audit 2026-03-09T17:42:43.375059+0000 mon.c (mon.2) 793 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:44 vm00 bash[28333]: cluster 2026-03-09T17:42:42.871040+0000 mgr.y (mgr.14505) 638 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:44 vm00 bash[28333]: cluster 2026-03-09T17:42:42.871040+0000 mgr.y (mgr.14505) 638 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:44 vm00 bash[28333]: audit 2026-03-09T17:42:43.374022+0000 mon.a (mon.0) 3462 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:44 vm00 bash[28333]: audit 2026-03-09T17:42:43.374022+0000 mon.a (mon.0) 3462 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:44 vm00 bash[28333]: audit 2026-03-09T17:42:43.375059+0000 mon.c (mon.2) 793 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:44 vm00 bash[28333]: audit 2026-03-09T17:42:43.375059+0000 mon.c (mon.2) 793 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:44 vm00 bash[20770]: cluster 2026-03-09T17:42:42.871040+0000 mgr.y (mgr.14505) 638 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:44 vm00 bash[20770]: cluster 2026-03-09T17:42:42.871040+0000 mgr.y (mgr.14505) 638 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:44 vm00 bash[20770]: audit 2026-03-09T17:42:43.374022+0000 mon.a (mon.0) 3462 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:44 vm00 bash[20770]: audit 2026-03-09T17:42:43.374022+0000 mon.a (mon.0) 3462 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:44 vm00 bash[20770]: audit 2026-03-09T17:42:43.375059+0000 mon.c (mon.2) 793 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:44 vm00 bash[20770]: audit 2026-03-09T17:42:43.375059+0000 mon.c (mon.2) 793 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:46 vm00 bash[28333]: cluster 2026-03-09T17:42:44.871444+0000 mgr.y (mgr.14505) 639 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:46 vm00 bash[28333]: cluster 2026-03-09T17:42:44.871444+0000 mgr.y (mgr.14505) 639 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:46 vm00 bash[28333]: audit 2026-03-09T17:42:45.583814+0000 mon.c (mon.2) 794 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:46 vm00 bash[28333]: audit 2026-03-09T17:42:45.583814+0000 mon.c (mon.2) 794 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:46 vm00 bash[28333]: audit 2026-03-09T17:42:45.584390+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:46 vm00 bash[28333]: audit 2026-03-09T17:42:45.584390+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:46 vm00 bash[20770]: cluster 2026-03-09T17:42:44.871444+0000 mgr.y (mgr.14505) 639 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:46 vm00 bash[20770]: cluster 2026-03-09T17:42:44.871444+0000 mgr.y (mgr.14505) 639 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:46 vm00 bash[20770]: audit 2026-03-09T17:42:45.583814+0000 mon.c (mon.2) 794 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:46 vm00 bash[20770]: audit 2026-03-09T17:42:45.583814+0000 mon.c (mon.2) 794 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:46 vm00 bash[20770]: audit 2026-03-09T17:42:45.584390+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:46 vm00 bash[20770]: audit 2026-03-09T17:42:45.584390+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:42:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:42:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:42:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:46 vm02 bash[23351]: cluster 2026-03-09T17:42:44.871444+0000 mgr.y (mgr.14505) 639 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:46 vm02 bash[23351]: cluster 2026-03-09T17:42:44.871444+0000 mgr.y (mgr.14505) 639 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:46 vm02 bash[23351]: audit 2026-03-09T17:42:45.583814+0000 mon.c (mon.2) 794 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:46 vm02 bash[23351]: audit 2026-03-09T17:42:45.583814+0000 mon.c (mon.2) 794 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:46 vm02 bash[23351]: audit 2026-03-09T17:42:45.584390+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:46 vm02 bash[23351]: audit 2026-03-09T17:42:45.584390+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: audit 2026-03-09T17:42:46.382716+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: audit 2026-03-09T17:42:46.382716+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: cluster 2026-03-09T17:42:46.385832+0000 mon.a (mon.0) 3465 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: cluster 2026-03-09T17:42:46.385832+0000 mon.a (mon.0) 3465 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: audit 2026-03-09T17:42:46.392851+0000 mon.c (mon.2) 795 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: audit 2026-03-09T17:42:46.392851+0000 mon.c (mon.2) 795 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: audit 2026-03-09T17:42:46.400651+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:47 vm00 bash[28333]: audit 2026-03-09T17:42:46.400651+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: audit 2026-03-09T17:42:46.382716+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: audit 2026-03-09T17:42:46.382716+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: cluster 2026-03-09T17:42:46.385832+0000 mon.a (mon.0) 3465 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: cluster 2026-03-09T17:42:46.385832+0000 mon.a (mon.0) 3465 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: audit 2026-03-09T17:42:46.392851+0000 mon.c (mon.2) 795 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: audit 2026-03-09T17:42:46.392851+0000 mon.c (mon.2) 795 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: audit 2026-03-09T17:42:46.400651+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:47 vm00 bash[20770]: audit 2026-03-09T17:42:46.400651+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: audit 2026-03-09T17:42:46.382716+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: audit 2026-03-09T17:42:46.382716+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: cluster 2026-03-09T17:42:46.385832+0000 mon.a (mon.0) 3465 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: cluster 2026-03-09T17:42:46.385832+0000 mon.a (mon.0) 3465 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: audit 2026-03-09T17:42:46.392851+0000 mon.c (mon.2) 795 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: audit 2026-03-09T17:42:46.392851+0000 mon.c (mon.2) 795 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: audit 2026-03-09T17:42:46.400651+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:47 vm02 bash[23351]: audit 2026-03-09T17:42:46.400651+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: cluster 2026-03-09T17:42:46.871773+0000 mgr.y (mgr.14505) 640 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: cluster 2026-03-09T17:42:46.871773+0000 mgr.y (mgr.14505) 640 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.403380+0000 mon.a (mon.0) 3467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.403380+0000 mon.a (mon.0) 3467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: cluster 2026-03-09T17:42:47.417798+0000 mon.a (mon.0) 3468 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: cluster 2026-03-09T17:42:47.417798+0000 mon.a (mon.0) 3468 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.444981+0000 mon.c (mon.2) 796 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.444981+0000 mon.c (mon.2) 796 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.445174+0000 mon.a (mon.0) 3469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.445174+0000 mon.a (mon.0) 3469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.445506+0000 mon.c (mon.2) 797 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.445506+0000 mon.c (mon.2) 797 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.445666+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:48 vm00 bash[28333]: audit 2026-03-09T17:42:47.445666+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: cluster 2026-03-09T17:42:46.871773+0000 mgr.y (mgr.14505) 640 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: cluster 2026-03-09T17:42:46.871773+0000 mgr.y (mgr.14505) 640 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.403380+0000 mon.a (mon.0) 3467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.403380+0000 mon.a (mon.0) 3467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: cluster 2026-03-09T17:42:47.417798+0000 mon.a (mon.0) 3468 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: cluster 2026-03-09T17:42:47.417798+0000 mon.a (mon.0) 3468 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.444981+0000 mon.c (mon.2) 796 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.444981+0000 mon.c (mon.2) 796 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.445174+0000 mon.a (mon.0) 3469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.445174+0000 mon.a (mon.0) 3469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.445506+0000 mon.c (mon.2) 797 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.445506+0000 mon.c (mon.2) 797 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.445666+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:48 vm00 bash[20770]: audit 2026-03-09T17:42:47.445666+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: cluster 2026-03-09T17:42:46.871773+0000 mgr.y (mgr.14505) 640 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: cluster 2026-03-09T17:42:46.871773+0000 mgr.y (mgr.14505) 640 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.403380+0000 mon.a (mon.0) 3467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.403380+0000 mon.a (mon.0) 3467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]': finished 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: cluster 2026-03-09T17:42:47.417798+0000 mon.a (mon.0) 3468 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: cluster 2026-03-09T17:42:47.417798+0000 mon.a (mon.0) 3468 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.444981+0000 mon.c (mon.2) 796 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.444981+0000 mon.c (mon.2) 796 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.445174+0000 mon.a (mon.0) 3469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.445174+0000 mon.a (mon.0) 3469 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.445506+0000 mon.c (mon.2) 797 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.445506+0000 mon.c (mon.2) 797 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.445666+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:48 vm02 bash[23351]: audit 2026-03-09T17:42:47.445666+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-146"}]: dispatch 2026-03-09T17:42:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:49 vm00 bash[28333]: cluster 2026-03-09T17:42:48.422607+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T17:42:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:49 vm00 bash[28333]: cluster 2026-03-09T17:42:48.422607+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T17:42:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:49 vm00 bash[20770]: cluster 2026-03-09T17:42:48.422607+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T17:42:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:49 vm00 bash[20770]: cluster 2026-03-09T17:42:48.422607+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T17:42:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:49 vm02 bash[23351]: cluster 2026-03-09T17:42:48.422607+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T17:42:49.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:49 vm02 bash[23351]: cluster 2026-03-09T17:42:48.422607+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: cluster 2026-03-09T17:42:48.872071+0000 mgr.y (mgr.14505) 641 : cluster [DBG] pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: cluster 2026-03-09T17:42:48.872071+0000 mgr.y (mgr.14505) 641 : cluster [DBG] pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: cluster 2026-03-09T17:42:49.419683+0000 mon.a (mon.0) 3472 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: cluster 2026-03-09T17:42:49.419683+0000 mon.a (mon.0) 3472 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: cluster 2026-03-09T17:42:49.460391+0000 mon.a (mon.0) 3473 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: cluster 2026-03-09T17:42:49.460391+0000 mon.a (mon.0) 3473 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: audit 2026-03-09T17:42:49.465116+0000 mon.c (mon.2) 798 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: audit 2026-03-09T17:42:49.465116+0000 mon.c (mon.2) 798 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: audit 2026-03-09T17:42:49.467475+0000 mon.a (mon.0) 3474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:50 vm00 bash[28333]: audit 2026-03-09T17:42:49.467475+0000 mon.a (mon.0) 3474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: cluster 2026-03-09T17:42:48.872071+0000 mgr.y (mgr.14505) 641 : cluster [DBG] pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: cluster 2026-03-09T17:42:48.872071+0000 mgr.y (mgr.14505) 641 : cluster [DBG] pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: cluster 2026-03-09T17:42:49.419683+0000 mon.a (mon.0) 3472 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: cluster 2026-03-09T17:42:49.419683+0000 mon.a (mon.0) 3472 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: cluster 2026-03-09T17:42:49.460391+0000 mon.a (mon.0) 3473 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: cluster 2026-03-09T17:42:49.460391+0000 mon.a (mon.0) 3473 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T17:42:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: audit 2026-03-09T17:42:49.465116+0000 mon.c (mon.2) 798 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: audit 2026-03-09T17:42:49.465116+0000 mon.c (mon.2) 798 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: audit 2026-03-09T17:42:49.467475+0000 mon.a (mon.0) 3474 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:50 vm00 bash[20770]: audit 2026-03-09T17:42:49.467475+0000 mon.a (mon.0) 3474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: cluster 2026-03-09T17:42:48.872071+0000 mgr.y (mgr.14505) 641 : cluster [DBG] pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: cluster 2026-03-09T17:42:48.872071+0000 mgr.y (mgr.14505) 641 : cluster [DBG] pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: cluster 2026-03-09T17:42:49.419683+0000 mon.a (mon.0) 3472 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: cluster 2026-03-09T17:42:49.419683+0000 mon.a (mon.0) 3472 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: cluster 2026-03-09T17:42:49.460391+0000 mon.a (mon.0) 3473 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: cluster 2026-03-09T17:42:49.460391+0000 mon.a (mon.0) 3473 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: audit 2026-03-09T17:42:49.465116+0000 mon.c (mon.2) 798 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: audit 2026-03-09T17:42:49.465116+0000 mon.c (mon.2) 798 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: audit 2026-03-09T17:42:49.467475+0000 mon.a (mon.0) 3474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:50.887 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:50 vm02 bash[23351]: audit 2026-03-09T17:42:49.467475+0000 mon.a (mon.0) 3474 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: audit 2026-03-09T17:42:50.433697+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: audit 2026-03-09T17:42:50.433697+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: cluster 2026-03-09T17:42:50.447851+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: cluster 2026-03-09T17:42:50.447851+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: audit 2026-03-09T17:42:50.488864+0000 mon.c (mon.2) 799 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: audit 2026-03-09T17:42:50.488864+0000 mon.c (mon.2) 799 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: audit 2026-03-09T17:42:50.489146+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: audit 2026-03-09T17:42:50.489146+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: cluster 2026-03-09T17:42:50.872429+0000 mgr.y (mgr.14505) 642 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:51 vm00 bash[28333]: cluster 2026-03-09T17:42:50.872429+0000 mgr.y (mgr.14505) 642 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: audit 2026-03-09T17:42:50.433697+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: audit 2026-03-09T17:42:50.433697+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: cluster 2026-03-09T17:42:50.447851+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: cluster 2026-03-09T17:42:50.447851+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: audit 2026-03-09T17:42:50.488864+0000 mon.c (mon.2) 799 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: audit 2026-03-09T17:42:50.488864+0000 mon.c (mon.2) 799 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: audit 2026-03-09T17:42:50.489146+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: audit 2026-03-09T17:42:50.489146+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: cluster 2026-03-09T17:42:50.872429+0000 mgr.y (mgr.14505) 642 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:42:51.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:51 vm00 bash[20770]: cluster 2026-03-09T17:42:50.872429+0000 mgr.y (mgr.14505) 642 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: audit 2026-03-09T17:42:50.433697+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: audit 2026-03-09T17:42:50.433697+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: cluster 2026-03-09T17:42:50.447851+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: cluster 2026-03-09T17:42:50.447851+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: audit 2026-03-09T17:42:50.488864+0000 mon.c (mon.2) 799 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: audit 2026-03-09T17:42:50.488864+0000 mon.c (mon.2) 799 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: audit 2026-03-09T17:42:50.489146+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: audit 2026-03-09T17:42:50.489146+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: cluster 2026-03-09T17:42:50.872429+0000 mgr.y (mgr.14505) 642 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:42:51.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:51 vm02 bash[23351]: cluster 2026-03-09T17:42:50.872429+0000 mgr.y (mgr.14505) 642 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 982 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:51.473745+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:51.473745+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: cluster 2026-03-09T17:42:51.484198+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: cluster 2026-03-09T17:42:51.484198+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:51.494788+0000 mon.c (mon.2) 800 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:51.494788+0000 mon.c (mon.2) 800 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:51.495102+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:51.495102+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:52.254037+0000 mgr.y (mgr.14505) 643 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:52 vm02 bash[23351]: audit 2026-03-09T17:42:52.254037+0000 mgr.y (mgr.14505) 643 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:52.637 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:42:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:51.473745+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:51.473745+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: cluster 2026-03-09T17:42:51.484198+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: cluster 2026-03-09T17:42:51.484198+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:51.494788+0000 mon.c (mon.2) 800 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:51.494788+0000 mon.c (mon.2) 800 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:51.495102+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:51.495102+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:52.254037+0000 mgr.y (mgr.14505) 643 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:52 vm00 bash[28333]: audit 2026-03-09T17:42:52.254037+0000 mgr.y (mgr.14505) 643 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:51.473745+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:51.473745+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: cluster 2026-03-09T17:42:51.484198+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: cluster 2026-03-09T17:42:51.484198+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:51.494788+0000 mon.c (mon.2) 800 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:51.494788+0000 mon.c (mon.2) 800 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:51.495102+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:51.495102+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:52.254037+0000 mgr.y (mgr.14505) 643 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:52 vm00 bash[20770]: audit 2026-03-09T17:42:52.254037+0000 mgr.y (mgr.14505) 643 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.483401+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]': finished 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.483401+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]': finished 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: cluster 2026-03-09T17:42:52.487575+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: cluster 2026-03-09T17:42:52.487575+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.540210+0000 mon.c (mon.2) 801 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.540210+0000 mon.c (mon.2) 801 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.540555+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.540555+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.540947+0000 mon.c (mon.2) 802 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.540947+0000 mon.c (mon.2) 802 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.541189+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: audit 2026-03-09T17:42:52.541189+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: cluster 2026-03-09T17:42:52.873282+0000 mgr.y (mgr.14505) 644 : cluster [DBG] pgmap v1141: 268 pgs: 3 unknown, 265 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:42:53.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:53 vm02 bash[23351]: cluster 2026-03-09T17:42:52.873282+0000 mgr.y (mgr.14505) 644 : cluster [DBG] pgmap v1141: 268 pgs: 3 unknown, 265 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.483401+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]': finished 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.483401+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]': finished 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: cluster 2026-03-09T17:42:52.487575+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: cluster 2026-03-09T17:42:52.487575+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.540210+0000 mon.c (mon.2) 801 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.540210+0000 mon.c (mon.2) 801 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.540555+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.540555+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.540947+0000 mon.c (mon.2) 802 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.540947+0000 mon.c (mon.2) 802 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.541189+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: audit 2026-03-09T17:42:52.541189+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: cluster 2026-03-09T17:42:52.873282+0000 mgr.y (mgr.14505) 644 : cluster [DBG] pgmap v1141: 268 pgs: 3 unknown, 265 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:53 vm00 bash[28333]: cluster 2026-03-09T17:42:52.873282+0000 mgr.y (mgr.14505) 644 : cluster [DBG] pgmap v1141: 268 pgs: 3 unknown, 265 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.483401+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]': finished 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.483401+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]': finished 2026-03-09T17:42:54.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: cluster 2026-03-09T17:42:52.487575+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: cluster 2026-03-09T17:42:52.487575+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.540210+0000 mon.c (mon.2) 801 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.540210+0000 mon.c (mon.2) 801 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.540555+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.540555+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.540947+0000 mon.c (mon.2) 802 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.540947+0000 mon.c (mon.2) 802 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.541189+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: audit 2026-03-09T17:42:52.541189+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-148"}]: dispatch 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: cluster 2026-03-09T17:42:52.873282+0000 mgr.y (mgr.14505) 644 : cluster [DBG] pgmap v1141: 268 pgs: 3 unknown, 265 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:42:54.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:53 vm00 bash[20770]: cluster 2026-03-09T17:42:52.873282+0000 mgr.y (mgr.14505) 644 : cluster [DBG] pgmap v1141: 268 pgs: 3 unknown, 265 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:54 vm00 bash[28333]: cluster 2026-03-09T17:42:53.494462+0000 mon.a (mon.0) 3485 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:54 vm00 bash[28333]: cluster 2026-03-09T17:42:53.494462+0000 mon.a (mon.0) 3485 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:54 vm00 bash[28333]: cluster 2026-03-09T17:42:53.605458+0000 mon.a (mon.0) 3486 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:54 vm00 bash[28333]: cluster 2026-03-09T17:42:53.605458+0000 mon.a (mon.0) 3486 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:54 vm00 bash[20770]: cluster 2026-03-09T17:42:53.494462+0000 mon.a (mon.0) 3485 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:54 vm00 bash[20770]: cluster 2026-03-09T17:42:53.494462+0000 mon.a (mon.0) 3485 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:54 vm00 bash[20770]: cluster 2026-03-09T17:42:53.605458+0000 mon.a (mon.0) 3486 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T17:42:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:54 vm00 bash[20770]: cluster 2026-03-09T17:42:53.605458+0000 mon.a (mon.0) 3486 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T17:42:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:54 vm02 bash[23351]: cluster 2026-03-09T17:42:53.494462+0000 mon.a (mon.0) 3485 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:54 vm02 bash[23351]: cluster 2026-03-09T17:42:53.494462+0000 mon.a (mon.0) 3485 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:42:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:54 vm02 bash[23351]: cluster 2026-03-09T17:42:53.605458+0000 mon.a (mon.0) 3486 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T17:42:55.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:54 vm02 bash[23351]: cluster 2026-03-09T17:42:53.605458+0000 mon.a (mon.0) 3486 : cluster [DBG] osdmap 
e726: 8 total, 8 up, 8 in 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: cluster 2026-03-09T17:42:54.671389+0000 mon.a (mon.0) 3487 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: cluster 2026-03-09T17:42:54.671389+0000 mon.a (mon.0) 3487 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: audit 2026-03-09T17:42:54.674164+0000 mon.c (mon.2) 803 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: audit 2026-03-09T17:42:54.674164+0000 mon.c (mon.2) 803 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: audit 2026-03-09T17:42:54.676366+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: audit 2026-03-09T17:42:54.676366+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: cluster 2026-03-09T17:42:54.873615+0000 mgr.y (mgr.14505) 645 : cluster [DBG] pgmap v1144: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:55 vm00 bash[28333]: cluster 2026-03-09T17:42:54.873615+0000 mgr.y (mgr.14505) 645 : cluster [DBG] pgmap v1144: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: cluster 2026-03-09T17:42:54.671389+0000 mon.a (mon.0) 3487 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: cluster 2026-03-09T17:42:54.671389+0000 mon.a (mon.0) 3487 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: audit 2026-03-09T17:42:54.674164+0000 mon.c (mon.2) 803 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: audit 2026-03-09T17:42:54.674164+0000 mon.c (mon.2) 803 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: audit 2026-03-09T17:42:54.676366+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: audit 2026-03-09T17:42:54.676366+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: cluster 2026-03-09T17:42:54.873615+0000 mgr.y (mgr.14505) 645 : cluster [DBG] pgmap v1144: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:42:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:55 vm00 bash[20770]: cluster 2026-03-09T17:42:54.873615+0000 mgr.y (mgr.14505) 645 : cluster [DBG] pgmap v1144: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: cluster 2026-03-09T17:42:54.671389+0000 mon.a (mon.0) 3487 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: cluster 2026-03-09T17:42:54.671389+0000 mon.a (mon.0) 3487 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: audit 2026-03-09T17:42:54.674164+0000 mon.c (mon.2) 803 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: audit 2026-03-09T17:42:54.674164+0000 mon.c (mon.2) 803 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: audit 2026-03-09T17:42:54.676366+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: audit 2026-03-09T17:42:54.676366+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: cluster 2026-03-09T17:42:54.873615+0000 mgr.y (mgr.14505) 645 : cluster [DBG] pgmap v1144: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:42:56.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:55 vm02 bash[23351]: cluster 2026-03-09T17:42:54.873615+0000 mgr.y (mgr.14505) 645 : cluster [DBG] pgmap v1144: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T17:42:56.704 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:42:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:42:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.668493+0000 mon.a (mon.0) 3489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.668493+0000 mon.a (mon.0) 3489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: cluster 2026-03-09T17:42:55.683853+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: cluster 2026-03-09T17:42:55.683853+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.740800+0000 mon.c (mon.2) 804 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.740800+0000 mon.c (mon.2) 804 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.741351+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.741351+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.741973+0000 mon.c (mon.2) 805 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.741973+0000 mon.c (mon.2) 805 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.742395+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: audit 2026-03-09T17:42:55.742395+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: cluster 2026-03-09T17:42:56.676506+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:56 vm00 bash[28333]: cluster 2026-03-09T17:42:56.676506+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.668493+0000 mon.a (mon.0) 3489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.668493+0000 mon.a (mon.0) 3489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: cluster 2026-03-09T17:42:55.683853+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: cluster 2026-03-09T17:42:55.683853+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.740800+0000 mon.c (mon.2) 804 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.740800+0000 mon.c (mon.2) 804 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.741351+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.741351+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.741973+0000 mon.c (mon.2) 805 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.741973+0000 mon.c (mon.2) 805 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.742395+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: audit 2026-03-09T17:42:55.742395+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: cluster 2026-03-09T17:42:56.676506+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T17:42:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:56 vm00 bash[20770]: cluster 2026-03-09T17:42:56.676506+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.668493+0000 mon.a (mon.0) 3489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.668493+0000 mon.a (mon.0) 3489 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: cluster 2026-03-09T17:42:55.683853+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: cluster 2026-03-09T17:42:55.683853+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.740800+0000 mon.c (mon.2) 804 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.740800+0000 mon.c (mon.2) 804 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.741351+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.741351+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.741973+0000 mon.c (mon.2) 805 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.741973+0000 mon.c (mon.2) 805 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.742395+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: audit 2026-03-09T17:42:55.742395+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-150"}]: dispatch 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: cluster 2026-03-09T17:42:56.676506+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T17:42:57.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:56 vm02 bash[23351]: cluster 2026-03-09T17:42:56.676506+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: cluster 2026-03-09T17:42:56.873960+0000 mgr.y (mgr.14505) 646 : cluster [DBG] pgmap v1147: 236 pgs: 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: cluster 2026-03-09T17:42:56.873960+0000 mgr.y (mgr.14505) 646 : cluster [DBG] pgmap v1147: 236 pgs: 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: audit 2026-03-09T17:42:57.580902+0000 mon.c (mon.2) 806 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: audit 2026-03-09T17:42:57.580902+0000 mon.c (mon.2) 806 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: cluster 2026-03-09T17:42:57.700018+0000 mon.a (mon.0) 3494 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: cluster 2026-03-09T17:42:57.700018+0000 mon.a (mon.0) 3494 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: audit 2026-03-09T17:42:57.700911+0000 mon.c (mon.2) 807 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:57 vm00 bash[28333]: audit 2026-03-09T17:42:57.700911+0000 mon.c (mon.2) 807 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: cluster 2026-03-09T17:42:56.873960+0000 mgr.y (mgr.14505) 646 : cluster [DBG] pgmap v1147: 236 pgs: 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: cluster 2026-03-09T17:42:56.873960+0000 mgr.y (mgr.14505) 646 : cluster [DBG] pgmap v1147: 236 pgs: 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: audit 2026-03-09T17:42:57.580902+0000 mon.c (mon.2) 806 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: audit 2026-03-09T17:42:57.580902+0000 mon.c (mon.2) 806 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: cluster 2026-03-09T17:42:57.700018+0000 mon.a (mon.0) 3494 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: cluster 2026-03-09T17:42:57.700018+0000 mon.a (mon.0) 3494 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: audit 2026-03-09T17:42:57.700911+0000 mon.c (mon.2) 807 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:57 vm00 bash[20770]: audit 2026-03-09T17:42:57.700911+0000 mon.c (mon.2) 807 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: cluster 2026-03-09T17:42:56.873960+0000 mgr.y (mgr.14505) 646 : cluster [DBG] pgmap v1147: 236 pgs: 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: cluster 2026-03-09T17:42:56.873960+0000 mgr.y (mgr.14505) 646 : cluster [DBG] pgmap v1147: 236 pgs: 236 active+clean; 4.3 MiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: audit 2026-03-09T17:42:57.580902+0000 mon.c (mon.2) 806 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: audit 2026-03-09T17:42:57.580902+0000 mon.c (mon.2) 806 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: cluster 2026-03-09T17:42:57.700018+0000 mon.a (mon.0) 3494 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: cluster 2026-03-09T17:42:57.700018+0000 mon.a (mon.0) 3494 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: audit 2026-03-09T17:42:57.700911+0000 mon.c (mon.2) 807 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:58.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:57 vm02 bash[23351]: audit 2026-03-09T17:42:57.700911+0000 mon.c (mon.2) 807 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.703465+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.703465+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.941874+0000 mon.c (mon.2) 808 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.941874+0000 mon.c (mon.2) 808 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.942986+0000 mon.c (mon.2) 809 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.942986+0000 mon.c (mon.2) 809 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.948909+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:57.948909+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:58.385757+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:58.385757+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:58.388736+0000 mon.c (mon.2) 810 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:58.388736+0000 mon.c (mon.2) 810 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:58.679678+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: audit 2026-03-09T17:42:58.679678+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: cluster 2026-03-09T17:42:58.683006+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T17:42:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:58 vm00 bash[28333]: cluster 2026-03-09T17:42:58.683006+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.703465+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.703465+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.941874+0000 mon.c (mon.2) 808 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.941874+0000 mon.c (mon.2) 808 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.942986+0000 mon.c (mon.2) 809 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.942986+0000 mon.c (mon.2) 809 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.948909+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:57.948909+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:58.385757+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:58.385757+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:58.388736+0000 mon.c (mon.2) 810 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:58.388736+0000 mon.c (mon.2) 810 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:58.679678+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: audit 2026-03-09T17:42:58.679678+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: cluster 2026-03-09T17:42:58.683006+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T17:42:59.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:58 vm00 bash[20770]: cluster 2026-03-09T17:42:58.683006+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.703465+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.703465+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.941874+0000 mon.c (mon.2) 808 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.941874+0000 mon.c (mon.2) 808 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.942986+0000 mon.c (mon.2) 809 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.942986+0000 mon.c (mon.2) 809 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.948909+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:57.948909+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:58.385757+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:58.385757+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:58.388736+0000 mon.c (mon.2) 810 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:58.388736+0000 mon.c (mon.2) 810 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:58.679678+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: audit 2026-03-09T17:42:58.679678+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: cluster 2026-03-09T17:42:58.683006+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T17:42:59.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:58 vm02 bash[23351]: cluster 2026-03-09T17:42:58.683006+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.750956+0000 mon.c (mon.2) 811 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.750956+0000 mon.c (mon.2) 811 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.751217+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.751217+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.751880+0000 mon.c (mon.2) 812 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.751880+0000 mon.c (mon.2) 812 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.752132+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: audit 2026-03-09T17:42:58.752132+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: cluster 2026-03-09T17:42:58.874296+0000 mgr.y (mgr.14505) 647 : cluster [DBG] pgmap v1150: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: cluster 2026-03-09T17:42:58.874296+0000 mgr.y (mgr.14505) 647 : cluster [DBG] pgmap v1150: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: cluster 2026-03-09T17:42:59.685725+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:42:59 vm00 bash[28333]: cluster 2026-03-09T17:42:59.685725+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.750956+0000 mon.c (mon.2) 811 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.750956+0000 mon.c (mon.2) 811 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.751217+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.751217+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.751880+0000 mon.c (mon.2) 812 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.751880+0000 mon.c (mon.2) 812 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.752132+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: audit 2026-03-09T17:42:58.752132+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: cluster 2026-03-09T17:42:58.874296+0000 mgr.y (mgr.14505) 647 : cluster [DBG] pgmap v1150: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: cluster 2026-03-09T17:42:58.874296+0000 mgr.y (mgr.14505) 647 : cluster [DBG] pgmap v1150: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: cluster 2026-03-09T17:42:59.685725+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T17:43:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:42:59 vm00 bash[20770]: cluster 2026-03-09T17:42:59.685725+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.750956+0000 mon.c (mon.2) 811 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.750956+0000 mon.c (mon.2) 811 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.751217+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.751217+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.751880+0000 mon.c (mon.2) 812 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.751880+0000 mon.c (mon.2) 812 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.752132+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: audit 2026-03-09T17:42:58.752132+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-152"}]: dispatch 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: cluster 2026-03-09T17:42:58.874296+0000 mgr.y (mgr.14505) 647 : cluster [DBG] pgmap v1150: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: cluster 2026-03-09T17:42:58.874296+0000 mgr.y (mgr.14505) 647 : cluster [DBG] pgmap v1150: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:00.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: cluster 2026-03-09T17:42:59.685725+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T17:43:00.137 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:42:59 vm02 bash[23351]: cluster 2026-03-09T17:42:59.685725+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: cluster 2026-03-09T17:43:00.740786+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: cluster 2026-03-09T17:43:00.740786+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: audit 2026-03-09T17:43:00.743137+0000 mon.c (mon.2) 813 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: audit 2026-03-09T17:43:00.743137+0000 mon.c (mon.2) 813 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: audit 2026-03-09T17:43:00.743410+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: audit 2026-03-09T17:43:00.743410+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: cluster 2026-03-09T17:43:00.874672+0000 mgr.y (mgr.14505) 648 : cluster [DBG] pgmap v1153: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:01 vm00 bash[28333]: cluster 2026-03-09T17:43:00.874672+0000 mgr.y (mgr.14505) 648 : cluster [DBG] pgmap v1153: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: cluster 2026-03-09T17:43:00.740786+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: cluster 2026-03-09T17:43:00.740786+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: audit 2026-03-09T17:43:00.743137+0000 mon.c (mon.2) 813 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: audit 2026-03-09T17:43:00.743137+0000 mon.c (mon.2) 813 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: audit 2026-03-09T17:43:00.743410+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: audit 2026-03-09T17:43:00.743410+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: cluster 2026-03-09T17:43:00.874672+0000 mgr.y (mgr.14505) 648 : cluster [DBG] pgmap v1153: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:01 vm00 bash[20770]: cluster 2026-03-09T17:43:00.874672+0000 mgr.y (mgr.14505) 648 : cluster [DBG] pgmap v1153: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: cluster 2026-03-09T17:43:00.740786+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: cluster 2026-03-09T17:43:00.740786+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: audit 2026-03-09T17:43:00.743137+0000 mon.c (mon.2) 813 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: audit 2026-03-09T17:43:00.743137+0000 mon.c (mon.2) 813 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: audit 2026-03-09T17:43:00.743410+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: audit 2026-03-09T17:43:00.743410+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: cluster 2026-03-09T17:43:00.874672+0000 mgr.y (mgr.14505) 648 : cluster [DBG] pgmap v1153: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:01 vm02 bash[23351]: cluster 2026-03-09T17:43:00.874672+0000 mgr.y (mgr.14505) 648 : cluster [DBG] pgmap v1153: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T17:43:02.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:43:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: cluster 2026-03-09T17:43:01.716004+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: cluster 2026-03-09T17:43:01.716004+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.728513+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.728513+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: cluster 2026-03-09T17:43:01.741135+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: cluster 2026-03-09T17:43:01.741135+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.757261+0000 mon.c (mon.2) 814 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.757261+0000 mon.c (mon.2) 814 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.759864+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.759864+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.784810+0000 mon.c (mon.2) 815 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.784810+0000 mon.c (mon.2) 815 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.785170+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.785170+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.785733+0000 mon.c (mon.2) 816 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.785733+0000 mon.c (mon.2) 816 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.786057+0000 mon.a (mon.0) 3510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:01.786057+0000 mon.a (mon.0) 3510 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:02.262623+0000 mgr.y (mgr.14505) 649 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:03.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:02 vm00 bash[28333]: audit 2026-03-09T17:43:02.262623+0000 mgr.y (mgr.14505) 649 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: cluster 2026-03-09T17:43:01.716004+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: cluster 2026-03-09T17:43:01.716004+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.728513+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.728513+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: cluster 2026-03-09T17:43:01.741135+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: cluster 2026-03-09T17:43:01.741135+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.757261+0000 mon.c (mon.2) 814 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.757261+0000 mon.c (mon.2) 814 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.759864+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.759864+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.784810+0000 mon.c (mon.2) 815 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.784810+0000 mon.c (mon.2) 815 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.785170+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.785170+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.785733+0000 mon.c (mon.2) 816 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.785733+0000 mon.c (mon.2) 816 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.786057+0000 mon.a (mon.0) 3510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:01.786057+0000 mon.a (mon.0) 3510 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:02.262623+0000 mgr.y (mgr.14505) 649 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:03.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:02 vm00 bash[20770]: audit 2026-03-09T17:43:02.262623+0000 mgr.y (mgr.14505) 649 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: cluster 2026-03-09T17:43:01.716004+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: cluster 2026-03-09T17:43:01.716004+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.728513+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.728513+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60118-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: cluster 2026-03-09T17:43:01.741135+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: cluster 2026-03-09T17:43:01.741135+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.757261+0000 mon.c (mon.2) 814 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.757261+0000 mon.c (mon.2) 814 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.759864+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.759864+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-60118-111","var": "dedup_tier","val": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.784810+0000 mon.c (mon.2) 815 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.784810+0000 mon.c (mon.2) 815 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.785170+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.785170+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.785733+0000 mon.c (mon.2) 816 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.785733+0000 mon.c (mon.2) 816 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.786057+0000 mon.a (mon.0) 3510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:01.786057+0000 mon.a (mon.0) 3510 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-60118-111", "tierpool": "test-rados-api-vm00-60118-154"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:02.262623+0000 mgr.y (mgr.14505) 649 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:03.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:02 vm02 bash[23351]: audit 2026-03-09T17:43:02.262623+0000 mgr.y (mgr.14505) 649 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:03 vm00 bash[28333]: cluster 2026-03-09T17:43:02.752368+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:03 vm00 bash[28333]: cluster 2026-03-09T17:43:02.752368+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:03 vm00 bash[28333]: cluster 2026-03-09T17:43:02.875449+0000 mgr.y (mgr.14505) 650 : cluster [DBG] pgmap v1156: 236 pgs: 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:03 vm00 bash[28333]: cluster 2026-03-09T17:43:02.875449+0000 mgr.y (mgr.14505) 650 : cluster [DBG] pgmap v1156: 236 pgs: 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:03 vm00 bash[20770]: cluster 2026-03-09T17:43:02.752368+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:03 vm00 bash[20770]: cluster 2026-03-09T17:43:02.752368+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:03 vm00 bash[20770]: cluster 2026-03-09T17:43:02.875449+0000 mgr.y (mgr.14505) 650 : cluster [DBG] pgmap v1156: 236 pgs: 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:43:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:03 vm00 bash[20770]: cluster 2026-03-09T17:43:02.875449+0000 mgr.y (mgr.14505) 650 : cluster [DBG] pgmap v1156: 236 pgs: 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:43:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:03 vm02 bash[23351]: cluster 2026-03-09T17:43:02.752368+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T17:43:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:03 vm02 bash[23351]: cluster 2026-03-09T17:43:02.752368+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T17:43:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:03 vm02 bash[23351]: cluster 2026-03-09T17:43:02.875449+0000 mgr.y (mgr.14505) 650 : cluster [DBG] pgmap v1156: 236 pgs: 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:43:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:03 vm02 
bash[23351]: cluster 2026-03-09T17:43:02.875449+0000 mgr.y (mgr.14505) 650 : cluster [DBG] pgmap v1156: 236 pgs: 236 active+clean; 4.3 MiB data, 984 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:04 vm00 bash[28333]: cluster 2026-03-09T17:43:03.761617+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:04 vm00 bash[28333]: cluster 2026-03-09T17:43:03.761617+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:04 vm00 bash[28333]: audit 2026-03-09T17:43:03.763995+0000 mon.c (mon.2) 817 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:04 vm00 bash[28333]: audit 2026-03-09T17:43:03.763995+0000 mon.c (mon.2) 817 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:04 vm00 bash[28333]: audit 2026-03-09T17:43:03.764460+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:04 vm00 bash[28333]: audit 2026-03-09T17:43:03.764460+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:04 vm00 bash[20770]: cluster 2026-03-09T17:43:03.761617+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:04 vm00 bash[20770]: cluster 2026-03-09T17:43:03.761617+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:04 vm00 bash[20770]: audit 2026-03-09T17:43:03.763995+0000 mon.c (mon.2) 817 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:04 vm00 bash[20770]: audit 2026-03-09T17:43:03.763995+0000 mon.c (mon.2) 817 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:04 vm00 bash[20770]: audit 2026-03-09T17:43:03.764460+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:04 vm00 bash[20770]: audit 2026-03-09T17:43:03.764460+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:04 vm02 bash[23351]: cluster 2026-03-09T17:43:03.761617+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T17:43:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:04 vm02 bash[23351]: cluster 2026-03-09T17:43:03.761617+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T17:43:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:04 vm02 bash[23351]: audit 2026-03-09T17:43:03.763995+0000 mon.c (mon.2) 817 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:04 vm02 bash[23351]: audit 2026-03-09T17:43:03.763995+0000 mon.c (mon.2) 817 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:04 vm02 bash[23351]: audit 2026-03-09T17:43:03.764460+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:04 vm02 bash[23351]: audit 2026-03-09T17:43:03.764460+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlush (7610 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FailedFlush 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FailedFlush (13286 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Flush 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Flush (8339 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushSnap 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushSnap (14032 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushTryFlushRaces 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushTryFlushRaces (8102 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlushReadRace 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlushReadRace (7469 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetRead 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: ok, 
hit_set contains 320:602f83fe:::foo:head 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetRead (9289 ms) 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetTrim 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first is 1773078100 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,1773078105,1773078106,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,1773078105,1773078106,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,1773078105,1773078106,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,1773078105,1773078106,1773078108,1773078109,0 2026-03-09T17:43:05.786 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,1773078105,1773078106,1773078108,1773078109,0 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078100,1773078102,1773078103,1773078105,1773078106,1773078108,1773078109,0 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773078103,1773078105,1773078106,1773078108,1773078109,1773078111,1773078112,0 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first now 1773078103, trimmed 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetTrim (20751 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteOn2ndRead 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: foo0 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteOn2ndRead (14139 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ProxyRead 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ProxyRead (18239 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.CachePin 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.CachePin (22974 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetRedirectRead 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] 
LibRadosTwoPoolsECPP.SetRedirectRead (5170 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetChunkRead 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetChunkRead (3080 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ManifestPromoteRead 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ManifestPromoteRead (3012 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TrySetDedupTier 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TrySetDedupTier (3059 ms) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP (234568 ms total) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] Global test environment tear-down 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [==========] 77 tests from 4 test suites ran. (853502 ms total) 2026-03-09T17:43:05.787 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ PASSED ] 77 tests. 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59983 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59983 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60160 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60160 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60582 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60582 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60352 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60352 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60239 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60239 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60082 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60082 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60317 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60317 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60727 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60727 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60742 
2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60742 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60100 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60100 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60618 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60618 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59919 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59919 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60034 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60034 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60473 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60473 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59911 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59911 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59927 2026-03-09T17:43:05.788 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59927 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60830 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60830 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60193 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60193 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60883 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60883 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60278 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60278 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60703 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60703 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60921 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60921 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60208 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60208 2026-03-09T17:43:05.789 
INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60504 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60504 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60798 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60798 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60641 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60641 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60851 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60851 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60438 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60438 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60135 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60135 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60907 2026-03-09T17:43:05.789 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60907 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: audit 2026-03-09T17:43:04.763303+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: audit 2026-03-09T17:43:04.763303+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: audit 2026-03-09T17:43:04.769019+0000 mon.c (mon.2) 818 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: audit 2026-03-09T17:43:04.769019+0000 mon.c (mon.2) 818 : audit [INF] from='client.? 
192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: cluster 2026-03-09T17:43:04.772703+0000 mon.a (mon.0) 3515 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: cluster 2026-03-09T17:43:04.772703+0000 mon.a (mon.0) 3515 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: audit 2026-03-09T17:43:04.773022+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: audit 2026-03-09T17:43:04.773022+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: cluster 2026-03-09T17:43:04.875684+0000 mgr.y (mgr.14505) 651 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:05 vm02 bash[23351]: cluster 2026-03-09T17:43:04.875684+0000 mgr.y (mgr.14505) 651 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: audit 2026-03-09T17:43:04.763303+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: audit 2026-03-09T17:43:04.763303+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: audit 2026-03-09T17:43:04.769019+0000 mon.c (mon.2) 818 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: audit 2026-03-09T17:43:04.769019+0000 mon.c (mon.2) 818 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: cluster 2026-03-09T17:43:04.772703+0000 mon.a (mon.0) 3515 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: cluster 2026-03-09T17:43:04.772703+0000 mon.a (mon.0) 3515 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: audit 2026-03-09T17:43:04.773022+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: audit 2026-03-09T17:43:04.773022+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: cluster 2026-03-09T17:43:04.875684+0000 mgr.y (mgr.14505) 651 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:05 vm00 bash[28333]: cluster 2026-03-09T17:43:04.875684+0000 mgr.y (mgr.14505) 651 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: audit 2026-03-09T17:43:04.763303+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: audit 2026-03-09T17:43:04.763303+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: audit 2026-03-09T17:43:04.769019+0000 mon.c (mon.2) 818 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: audit 2026-03-09T17:43:04.769019+0000 mon.c (mon.2) 818 : audit [INF] from='client.? 192.168.123.100:0/2909873078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: cluster 2026-03-09T17:43:04.772703+0000 mon.a (mon.0) 3515 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: cluster 2026-03-09T17:43:04.772703+0000 mon.a (mon.0) 3515 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: audit 2026-03-09T17:43:04.773022+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: audit 2026-03-09T17:43:04.773022+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]: dispatch 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: cluster 2026-03-09T17:43:04.875684+0000 mgr.y (mgr.14505) 651 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:05 vm00 bash[20770]: cluster 2026-03-09T17:43:04.875684+0000 mgr.y (mgr.14505) 651 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:43:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:43:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:43:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:06 vm02 bash[23351]: audit 2026-03-09T17:43:05.767369+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:06 vm02 bash[23351]: audit 2026-03-09T17:43:05.767369+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:06 vm02 bash[23351]: cluster 2026-03-09T17:43:05.781322+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T17:43:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:06 vm02 bash[23351]: cluster 2026-03-09T17:43:05.781322+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:06 vm00 bash[28333]: audit 2026-03-09T17:43:05.767369+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:06 vm00 bash[28333]: audit 2026-03-09T17:43:05.767369+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:06 vm00 bash[28333]: cluster 2026-03-09T17:43:05.781322+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:06 vm00 bash[28333]: cluster 2026-03-09T17:43:05.781322+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:06 vm00 bash[20770]: audit 2026-03-09T17:43:05.767369+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:06 vm00 bash[20770]: audit 2026-03-09T17:43:05.767369+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-60118-111"}]': finished 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:06 vm00 bash[20770]: cluster 2026-03-09T17:43:05.781322+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T17:43:07.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:06 vm00 bash[20770]: cluster 2026-03-09T17:43:05.781322+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T17:43:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:07 vm02 bash[23351]: cluster 2026-03-09T17:43:06.825967+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:07 vm02 bash[23351]: cluster 2026-03-09T17:43:06.825967+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:07 vm02 bash[23351]: cluster 2026-03-09T17:43:06.876009+0000 mgr.y (mgr.14505) 652 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:07 vm02 bash[23351]: cluster 2026-03-09T17:43:06.876009+0000 mgr.y (mgr.14505) 652 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:07 vm00 bash[28333]: cluster 2026-03-09T17:43:06.825967+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:07 vm00 bash[28333]: cluster 2026-03-09T17:43:06.825967+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:07 vm00 bash[28333]: cluster 2026-03-09T17:43:06.876009+0000 mgr.y (mgr.14505) 652 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:07 vm00 bash[28333]: cluster 2026-03-09T17:43:06.876009+0000 mgr.y (mgr.14505) 652 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:07 vm00 bash[20770]: cluster 2026-03-09T17:43:06.825967+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:07 vm00 bash[20770]: cluster 2026-03-09T17:43:06.825967+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:07 vm00 bash[20770]: cluster 2026-03-09T17:43:06.876009+0000 mgr.y (mgr.14505) 652 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:07 vm00 bash[20770]: cluster 
2026-03-09T17:43:06.876009+0000 mgr.y (mgr.14505) 652 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 984 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:09 vm00 bash[28333]: cluster 2026-03-09T17:43:08.876809+0000 mgr.y (mgr.14505) 653 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:09 vm00 bash[28333]: cluster 2026-03-09T17:43:08.876809+0000 mgr.y (mgr.14505) 653 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:09 vm00 bash[20770]: cluster 2026-03-09T17:43:08.876809+0000 mgr.y (mgr.14505) 653 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:09 vm00 bash[20770]: cluster 2026-03-09T17:43:08.876809+0000 mgr.y (mgr.14505) 653 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:10.318 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:09 vm02 bash[23351]: cluster 2026-03-09T17:43:08.876809+0000 mgr.y (mgr.14505) 653 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:10.319 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:09 vm02 bash[23351]: cluster 2026-03-09T17:43:08.876809+0000 mgr.y (mgr.14505) 653 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:43:12.271 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:11 vm02 bash[23351]: cluster 2026-03-09T17:43:10.877269+0000 mgr.y (mgr.14505) 654 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T17:43:12.271 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:11 vm02 bash[23351]: cluster 2026-03-09T17:43:10.877269+0000 mgr.y (mgr.14505) 654 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T17:43:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:11 vm00 bash[28333]: cluster 2026-03-09T17:43:10.877269+0000 mgr.y (mgr.14505) 654 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T17:43:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:11 vm00 bash[28333]: cluster 2026-03-09T17:43:10.877269+0000 mgr.y (mgr.14505) 654 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T17:43:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:11 vm00 bash[20770]: cluster 2026-03-09T17:43:10.877269+0000 mgr.y (mgr.14505) 654 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T17:43:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:11 vm00 bash[20770]: cluster 2026-03-09T17:43:10.877269+0000 mgr.y (mgr.14505) 654 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T17:43:12.636 
INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:43:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:43:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:12 vm00 bash[28333]: audit 2026-03-09T17:43:12.271164+0000 mgr.y (mgr.14505) 655 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:12 vm00 bash[28333]: audit 2026-03-09T17:43:12.271164+0000 mgr.y (mgr.14505) 655 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:12 vm00 bash[20770]: audit 2026-03-09T17:43:12.271164+0000 mgr.y (mgr.14505) 655 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:12 vm00 bash[20770]: audit 2026-03-09T17:43:12.271164+0000 mgr.y (mgr.14505) 655 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:12 vm02 bash[23351]: audit 2026-03-09T17:43:12.271164+0000 mgr.y (mgr.14505) 655 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:12 vm02 bash[23351]: audit 2026-03-09T17:43:12.271164+0000 mgr.y (mgr.14505) 655 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:13 vm00 bash[28333]: cluster 2026-03-09T17:43:12.878075+0000 mgr.y (mgr.14505) 656 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:13 vm00 bash[28333]: cluster 2026-03-09T17:43:12.878075+0000 mgr.y (mgr.14505) 656 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:13 vm00 bash[28333]: audit 2026-03-09T17:43:13.396191+0000 mon.c (mon.2) 819 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:13 vm00 bash[28333]: audit 2026-03-09T17:43:13.396191+0000 mon.c (mon.2) 819 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:13 vm00 bash[20770]: cluster 2026-03-09T17:43:12.878075+0000 mgr.y (mgr.14505) 656 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:13 vm00 bash[20770]: cluster 2026-03-09T17:43:12.878075+0000 mgr.y (mgr.14505) 656 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:13 vm00 bash[20770]: audit 2026-03-09T17:43:13.396191+0000 mon.c (mon.2) 819 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:13 vm00 bash[20770]: audit 2026-03-09T17:43:13.396191+0000 mon.c (mon.2) 819 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:13 vm02 bash[23351]: cluster 2026-03-09T17:43:12.878075+0000 mgr.y (mgr.14505) 656 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:13 vm02 bash[23351]: cluster 2026-03-09T17:43:12.878075+0000 mgr.y (mgr.14505) 656 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:13 vm02 bash[23351]: audit 2026-03-09T17:43:13.396191+0000 mon.c (mon.2) 819 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:13 vm02 bash[23351]: audit 2026-03-09T17:43:13.396191+0000 mon.c (mon.2) 819 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:16 vm00 bash[28333]: cluster 2026-03-09T17:43:14.878608+0000 mgr.y (mgr.14505) 657 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:43:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:16 vm00 bash[28333]: cluster 2026-03-09T17:43:14.878608+0000 mgr.y (mgr.14505) 657 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:43:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:16 vm00 bash[20770]: cluster 2026-03-09T17:43:14.878608+0000 mgr.y (mgr.14505) 657 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:43:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:16 vm00 bash[20770]: cluster 2026-03-09T17:43:14.878608+0000 mgr.y (mgr.14505) 657 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:43:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:16 vm02 bash[23351]: cluster 2026-03-09T17:43:14.878608+0000 mgr.y (mgr.14505) 657 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:43:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:16 vm02 bash[23351]: cluster 2026-03-09T17:43:14.878608+0000 mgr.y (mgr.14505) 657 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T17:43:16.788 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:43:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:43:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:43:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:18 vm00 bash[28333]: cluster 2026-03-09T17:43:16.878913+0000 mgr.y (mgr.14505) 658 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T17:43:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:18 vm00 bash[28333]: cluster 2026-03-09T17:43:16.878913+0000 mgr.y (mgr.14505) 658 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T17:43:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:18 vm00 bash[20770]: cluster 2026-03-09T17:43:16.878913+0000 mgr.y (mgr.14505) 658 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T17:43:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:18 vm00 bash[20770]: cluster 2026-03-09T17:43:16.878913+0000 mgr.y (mgr.14505) 658 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T17:43:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:18 vm02 bash[23351]: cluster 2026-03-09T17:43:16.878913+0000 mgr.y (mgr.14505) 658 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T17:43:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:18 vm02 bash[23351]: cluster 2026-03-09T17:43:16.878913+0000 mgr.y (mgr.14505) 658 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T17:43:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:20 vm00 bash[28333]: cluster 2026-03-09T17:43:18.879644+0000 mgr.y (mgr.14505) 659 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:20 vm00 bash[28333]: cluster 2026-03-09T17:43:18.879644+0000 mgr.y (mgr.14505) 659 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:20 vm00 bash[20770]: cluster 2026-03-09T17:43:18.879644+0000 mgr.y (mgr.14505) 659 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:20 vm00 bash[20770]: cluster 2026-03-09T17:43:18.879644+0000 mgr.y (mgr.14505) 659 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:20 vm02 bash[23351]: cluster 2026-03-09T17:43:18.879644+0000 mgr.y (mgr.14505) 659 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:20 vm02 bash[23351]: cluster 2026-03-09T17:43:18.879644+0000 mgr.y (mgr.14505) 659 : cluster [DBG] pgmap v1167: 228 pgs: 228 
active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:22 vm00 bash[28333]: cluster 2026-03-09T17:43:20.879972+0000 mgr.y (mgr.14505) 660 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:22 vm00 bash[28333]: cluster 2026-03-09T17:43:20.879972+0000 mgr.y (mgr.14505) 660 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:22 vm00 bash[20770]: cluster 2026-03-09T17:43:20.879972+0000 mgr.y (mgr.14505) 660 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:22 vm00 bash[20770]: cluster 2026-03-09T17:43:20.879972+0000 mgr.y (mgr.14505) 660 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:22 vm02 bash[23351]: cluster 2026-03-09T17:43:20.879972+0000 mgr.y (mgr.14505) 660 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:22 vm02 bash[23351]: cluster 2026-03-09T17:43:20.879972+0000 mgr.y (mgr.14505) 660 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:22.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:43:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:43:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:23 vm02 bash[23351]: audit 2026-03-09T17:43:22.277327+0000 mgr.y (mgr.14505) 661 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:23 vm02 bash[23351]: audit 2026-03-09T17:43:22.277327+0000 mgr.y (mgr.14505) 661 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:23 vm00 bash[28333]: audit 2026-03-09T17:43:22.277327+0000 mgr.y (mgr.14505) 661 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:23 vm00 bash[28333]: audit 2026-03-09T17:43:22.277327+0000 mgr.y (mgr.14505) 661 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:23 vm00 bash[20770]: audit 2026-03-09T17:43:22.277327+0000 mgr.y (mgr.14505) 661 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:23 vm00 bash[20770]: audit 2026-03-09T17:43:22.277327+0000 mgr.y (mgr.14505) 661 : audit 
[DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:24 vm02 bash[23351]: cluster 2026-03-09T17:43:22.880565+0000 mgr.y (mgr.14505) 662 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:24 vm02 bash[23351]: cluster 2026-03-09T17:43:22.880565+0000 mgr.y (mgr.14505) 662 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:24 vm00 bash[28333]: cluster 2026-03-09T17:43:22.880565+0000 mgr.y (mgr.14505) 662 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:24 vm00 bash[28333]: cluster 2026-03-09T17:43:22.880565+0000 mgr.y (mgr.14505) 662 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:24 vm00 bash[20770]: cluster 2026-03-09T17:43:22.880565+0000 mgr.y (mgr.14505) 662 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:24 vm00 bash[20770]: cluster 2026-03-09T17:43:22.880565+0000 mgr.y (mgr.14505) 662 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:43:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:43:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:43:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:26 vm00 bash[20770]: cluster 2026-03-09T17:43:24.881034+0000 mgr.y (mgr.14505) 663 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:26 vm00 bash[20770]: cluster 2026-03-09T17:43:24.881034+0000 mgr.y (mgr.14505) 663 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:26 vm00 bash[28333]: cluster 2026-03-09T17:43:24.881034+0000 mgr.y (mgr.14505) 663 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:26 vm00 bash[28333]: cluster 2026-03-09T17:43:24.881034+0000 mgr.y (mgr.14505) 663 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:26 vm02 bash[23351]: cluster 2026-03-09T17:43:24.881034+0000 mgr.y (mgr.14505) 663 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:26.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:26 vm02 bash[23351]: cluster 2026-03-09T17:43:24.881034+0000 mgr.y (mgr.14505) 663 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:28 vm00 bash[28333]: cluster 2026-03-09T17:43:26.881341+0000 mgr.y (mgr.14505) 664 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:28 vm00 bash[28333]: cluster 2026-03-09T17:43:26.881341+0000 mgr.y (mgr.14505) 664 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:28 vm00 bash[20770]: cluster 2026-03-09T17:43:26.881341+0000 mgr.y (mgr.14505) 664 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:28 vm00 bash[20770]: cluster 2026-03-09T17:43:26.881341+0000 mgr.y (mgr.14505) 664 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:28 vm02 bash[23351]: cluster 2026-03-09T17:43:26.881341+0000 mgr.y (mgr.14505) 664 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:28 vm02 bash[23351]: cluster 2026-03-09T17:43:26.881341+0000 mgr.y (mgr.14505) 664 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:29 vm00 bash[28333]: audit 2026-03-09T17:43:28.403532+0000 mon.c (mon.2) 820 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:29 vm00 bash[28333]: audit 2026-03-09T17:43:28.403532+0000 mon.c (mon.2) 820 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:29 vm00 bash[20770]: audit 2026-03-09T17:43:28.403532+0000 mon.c (mon.2) 820 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:29 vm00 bash[20770]: audit 2026-03-09T17:43:28.403532+0000 mon.c (mon.2) 820 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:29 vm02 bash[23351]: audit 2026-03-09T17:43:28.403532+0000 mon.c (mon.2) 820 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:29 vm02 bash[23351]: audit 
2026-03-09T17:43:28.403532+0000 mon.c (mon.2) 820 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:30 vm00 bash[28333]: cluster 2026-03-09T17:43:28.882046+0000 mgr.y (mgr.14505) 665 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:30 vm00 bash[28333]: cluster 2026-03-09T17:43:28.882046+0000 mgr.y (mgr.14505) 665 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:30 vm00 bash[20770]: cluster 2026-03-09T17:43:28.882046+0000 mgr.y (mgr.14505) 665 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:30 vm00 bash[20770]: cluster 2026-03-09T17:43:28.882046+0000 mgr.y (mgr.14505) 665 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:30 vm02 bash[23351]: cluster 2026-03-09T17:43:28.882046+0000 mgr.y (mgr.14505) 665 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:30 vm02 bash[23351]: cluster 2026-03-09T17:43:28.882046+0000 mgr.y (mgr.14505) 665 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:32 vm00 bash[28333]: cluster 2026-03-09T17:43:30.882499+0000 mgr.y (mgr.14505) 666 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:32 vm00 bash[28333]: cluster 2026-03-09T17:43:30.882499+0000 mgr.y (mgr.14505) 666 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:32 vm00 bash[20770]: cluster 2026-03-09T17:43:30.882499+0000 mgr.y (mgr.14505) 666 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:32 vm00 bash[20770]: cluster 2026-03-09T17:43:30.882499+0000 mgr.y (mgr.14505) 666 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:32 vm02 bash[23351]: cluster 2026-03-09T17:43:30.882499+0000 mgr.y (mgr.14505) 666 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:32 vm02 bash[23351]: cluster 2026-03-09T17:43:30.882499+0000 mgr.y (mgr.14505) 666 : cluster [DBG] pgmap v1173: 228 
pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:32.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:43:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:43:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:33 vm00 bash[28333]: audit 2026-03-09T17:43:32.285887+0000 mgr.y (mgr.14505) 667 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:33 vm00 bash[28333]: audit 2026-03-09T17:43:32.285887+0000 mgr.y (mgr.14505) 667 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:33 vm00 bash[20770]: audit 2026-03-09T17:43:32.285887+0000 mgr.y (mgr.14505) 667 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:33 vm00 bash[20770]: audit 2026-03-09T17:43:32.285887+0000 mgr.y (mgr.14505) 667 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:33 vm02 bash[23351]: audit 2026-03-09T17:43:32.285887+0000 mgr.y (mgr.14505) 667 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:33.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:33 vm02 bash[23351]: audit 2026-03-09T17:43:32.285887+0000 mgr.y (mgr.14505) 667 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:34 vm02 bash[23351]: cluster 2026-03-09T17:43:32.883283+0000 mgr.y (mgr.14505) 668 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:34 vm02 bash[23351]: cluster 2026-03-09T17:43:32.883283+0000 mgr.y (mgr.14505) 668 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:34 vm00 bash[28333]: cluster 2026-03-09T17:43:32.883283+0000 mgr.y (mgr.14505) 668 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:34 vm00 bash[28333]: cluster 2026-03-09T17:43:32.883283+0000 mgr.y (mgr.14505) 668 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:34 vm00 bash[20770]: cluster 2026-03-09T17:43:32.883283+0000 mgr.y (mgr.14505) 668 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:34 vm00 bash[20770]: cluster 2026-03-09T17:43:32.883283+0000 mgr.y 
(mgr.14505) 668 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:36 vm02 bash[23351]: cluster 2026-03-09T17:43:34.883713+0000 mgr.y (mgr.14505) 669 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:36 vm02 bash[23351]: cluster 2026-03-09T17:43:34.883713+0000 mgr.y (mgr.14505) 669 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:36 vm00 bash[28333]: cluster 2026-03-09T17:43:34.883713+0000 mgr.y (mgr.14505) 669 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:36 vm00 bash[28333]: cluster 2026-03-09T17:43:34.883713+0000 mgr.y (mgr.14505) 669 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:36 vm00 bash[20770]: cluster 2026-03-09T17:43:34.883713+0000 mgr.y (mgr.14505) 669 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:36 vm00 bash[20770]: cluster 2026-03-09T17:43:34.883713+0000 mgr.y (mgr.14505) 669 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:43:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:43:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:43:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:38 vm02 bash[23351]: cluster 2026-03-09T17:43:36.884038+0000 mgr.y (mgr.14505) 670 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:38 vm02 bash[23351]: cluster 2026-03-09T17:43:36.884038+0000 mgr.y (mgr.14505) 670 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:38 vm00 bash[28333]: cluster 2026-03-09T17:43:36.884038+0000 mgr.y (mgr.14505) 670 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:38 vm00 bash[28333]: cluster 2026-03-09T17:43:36.884038+0000 mgr.y (mgr.14505) 670 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:38 vm00 bash[20770]: cluster 2026-03-09T17:43:36.884038+0000 mgr.y (mgr.14505) 670 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:38.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:38 vm00 bash[20770]: cluster 2026-03-09T17:43:36.884038+0000 mgr.y (mgr.14505) 670 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:40 vm02 bash[23351]: cluster 2026-03-09T17:43:38.884718+0000 mgr.y (mgr.14505) 671 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:40 vm02 bash[23351]: cluster 2026-03-09T17:43:38.884718+0000 mgr.y (mgr.14505) 671 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:40 vm00 bash[28333]: cluster 2026-03-09T17:43:38.884718+0000 mgr.y (mgr.14505) 671 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:40 vm00 bash[28333]: cluster 2026-03-09T17:43:38.884718+0000 mgr.y (mgr.14505) 671 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:40 vm00 bash[20770]: cluster 2026-03-09T17:43:38.884718+0000 mgr.y (mgr.14505) 671 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:40 vm00 bash[20770]: cluster 2026-03-09T17:43:38.884718+0000 mgr.y (mgr.14505) 671 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:42 vm02 bash[23351]: cluster 2026-03-09T17:43:40.885138+0000 mgr.y (mgr.14505) 672 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:42 vm02 bash[23351]: cluster 2026-03-09T17:43:40.885138+0000 mgr.y (mgr.14505) 672 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:42.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:43:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:43:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:42 vm00 bash[28333]: cluster 2026-03-09T17:43:40.885138+0000 mgr.y (mgr.14505) 672 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:42 vm00 bash[28333]: cluster 2026-03-09T17:43:40.885138+0000 mgr.y (mgr.14505) 672 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:42 vm00 bash[20770]: cluster 2026-03-09T17:43:40.885138+0000 mgr.y (mgr.14505) 672 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:42 vm00 bash[20770]: cluster 2026-03-09T17:43:40.885138+0000 mgr.y (mgr.14505) 672 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:43 vm02 bash[23351]: audit 2026-03-09T17:43:42.296581+0000 mgr.y (mgr.14505) 673 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:43 vm02 bash[23351]: audit 2026-03-09T17:43:42.296581+0000 mgr.y (mgr.14505) 673 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:43 vm00 bash[28333]: audit 2026-03-09T17:43:42.296581+0000 mgr.y (mgr.14505) 673 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:43 vm00 bash[28333]: audit 2026-03-09T17:43:42.296581+0000 mgr.y (mgr.14505) 673 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:43 vm00 bash[20770]: audit 2026-03-09T17:43:42.296581+0000 mgr.y (mgr.14505) 673 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:43 vm00 bash[20770]: audit 2026-03-09T17:43:42.296581+0000 mgr.y (mgr.14505) 673 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:44 vm02 bash[23351]: cluster 2026-03-09T17:43:42.885799+0000 mgr.y (mgr.14505) 674 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:44 vm02 bash[23351]: cluster 2026-03-09T17:43:42.885799+0000 mgr.y (mgr.14505) 674 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:44 vm02 bash[23351]: audit 2026-03-09T17:43:43.409746+0000 mon.c (mon.2) 821 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:44 vm02 bash[23351]: audit 2026-03-09T17:43:43.409746+0000 mon.c (mon.2) 821 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:44 vm00 bash[28333]: cluster 2026-03-09T17:43:42.885799+0000 mgr.y (mgr.14505) 674 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 17:43:44 vm00 bash[28333]: cluster 2026-03-09T17:43:42.885799+0000 mgr.y (mgr.14505) 674 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:44 vm00 bash[28333]: audit 2026-03-09T17:43:43.409746+0000 mon.c (mon.2) 821 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:44 vm00 bash[28333]: audit 2026-03-09T17:43:43.409746+0000 mon.c (mon.2) 821 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:44 vm00 bash[20770]: cluster 2026-03-09T17:43:42.885799+0000 mgr.y (mgr.14505) 674 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:44 vm00 bash[20770]: cluster 2026-03-09T17:43:42.885799+0000 mgr.y (mgr.14505) 674 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:44 vm00 bash[20770]: audit 2026-03-09T17:43:43.409746+0000 mon.c (mon.2) 821 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:44 vm00 bash[20770]: audit 2026-03-09T17:43:43.409746+0000 mon.c (mon.2) 821 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:46 vm02 bash[23351]: cluster 2026-03-09T17:43:44.886202+0000 mgr.y (mgr.14505) 675 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:46 vm02 bash[23351]: cluster 2026-03-09T17:43:44.886202+0000 mgr.y (mgr.14505) 675 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:43:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:43:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:43:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:46 vm00 bash[28333]: cluster 2026-03-09T17:43:44.886202+0000 mgr.y (mgr.14505) 675 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:46 vm00 bash[28333]: cluster 2026-03-09T17:43:44.886202+0000 mgr.y (mgr.14505) 675 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:46 vm00 bash[20770]: cluster 2026-03-09T17:43:44.886202+0000 mgr.y (mgr.14505) 675 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 
982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:46 vm00 bash[20770]: cluster 2026-03-09T17:43:44.886202+0000 mgr.y (mgr.14505) 675 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:48 vm02 bash[23351]: cluster 2026-03-09T17:43:46.886576+0000 mgr.y (mgr.14505) 676 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:48 vm02 bash[23351]: cluster 2026-03-09T17:43:46.886576+0000 mgr.y (mgr.14505) 676 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:48 vm00 bash[20770]: cluster 2026-03-09T17:43:46.886576+0000 mgr.y (mgr.14505) 676 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:48 vm00 bash[20770]: cluster 2026-03-09T17:43:46.886576+0000 mgr.y (mgr.14505) 676 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:48 vm00 bash[28333]: cluster 2026-03-09T17:43:46.886576+0000 mgr.y (mgr.14505) 676 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:48 vm00 bash[28333]: cluster 2026-03-09T17:43:46.886576+0000 mgr.y (mgr.14505) 676 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:50 vm02 bash[23351]: cluster 2026-03-09T17:43:48.887205+0000 mgr.y (mgr.14505) 677 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:50 vm02 bash[23351]: cluster 2026-03-09T17:43:48.887205+0000 mgr.y (mgr.14505) 677 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:50 vm00 bash[20770]: cluster 2026-03-09T17:43:48.887205+0000 mgr.y (mgr.14505) 677 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:50 vm00 bash[20770]: cluster 2026-03-09T17:43:48.887205+0000 mgr.y (mgr.14505) 677 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:50 vm00 bash[28333]: cluster 2026-03-09T17:43:48.887205+0000 mgr.y (mgr.14505) 677 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:50.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:50 vm00 bash[28333]: cluster 2026-03-09T17:43:48.887205+0000 mgr.y (mgr.14505) 677 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:52 vm02 bash[23351]: cluster 2026-03-09T17:43:50.887460+0000 mgr.y (mgr.14505) 678 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:52 vm02 bash[23351]: cluster 2026-03-09T17:43:50.887460+0000 mgr.y (mgr.14505) 678 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:52.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:43:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:43:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:52 vm00 bash[28333]: cluster 2026-03-09T17:43:50.887460+0000 mgr.y (mgr.14505) 678 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:52 vm00 bash[28333]: cluster 2026-03-09T17:43:50.887460+0000 mgr.y (mgr.14505) 678 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:52 vm00 bash[20770]: cluster 2026-03-09T17:43:50.887460+0000 mgr.y (mgr.14505) 678 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:52 vm00 bash[20770]: cluster 2026-03-09T17:43:50.887460+0000 mgr.y (mgr.14505) 678 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:43:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:53 vm02 bash[23351]: audit 2026-03-09T17:43:52.302136+0000 mgr.y (mgr.14505) 679 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:53 vm02 bash[23351]: audit 2026-03-09T17:43:52.302136+0000 mgr.y (mgr.14505) 679 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:53 vm00 bash[20770]: audit 2026-03-09T17:43:52.302136+0000 mgr.y (mgr.14505) 679 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:53 vm00 bash[20770]: audit 2026-03-09T17:43:52.302136+0000 mgr.y (mgr.14505) 679 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:53 vm00 bash[28333]: audit 2026-03-09T17:43:52.302136+0000 mgr.y (mgr.14505) 679 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T17:43:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:53 vm00 bash[28333]: audit 2026-03-09T17:43:52.302136+0000 mgr.y (mgr.14505) 679 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:43:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:54 vm02 bash[23351]: cluster 2026-03-09T17:43:52.887985+0000 mgr.y (mgr.14505) 680 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:54 vm02 bash[23351]: cluster 2026-03-09T17:43:52.887985+0000 mgr.y (mgr.14505) 680 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:54 vm00 bash[20770]: cluster 2026-03-09T17:43:52.887985+0000 mgr.y (mgr.14505) 680 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:54 vm00 bash[20770]: cluster 2026-03-09T17:43:52.887985+0000 mgr.y (mgr.14505) 680 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:54 vm00 bash[28333]: cluster 2026-03-09T17:43:52.887985+0000 mgr.y (mgr.14505) 680 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:54 vm00 bash[28333]: cluster 2026-03-09T17:43:52.887985+0000 mgr.y (mgr.14505) 680 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:56 vm00 bash[20770]: cluster 2026-03-09T17:43:54.888396+0000 mgr.y (mgr.14505) 681 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:56 vm00 bash[20770]: cluster 2026-03-09T17:43:54.888396+0000 mgr.y (mgr.14505) 681 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:56 vm00 bash[28333]: cluster 2026-03-09T17:43:54.888396+0000 mgr.y (mgr.14505) 681 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:56 vm00 bash[28333]: cluster 2026-03-09T17:43:54.888396+0000 mgr.y (mgr.14505) 681 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:43:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:43:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:43:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:56 vm02 bash[23351]: cluster 2026-03-09T17:43:54.888396+0000 mgr.y (mgr.14505) 681 : 
cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:56 vm02 bash[23351]: cluster 2026-03-09T17:43:54.888396+0000 mgr.y (mgr.14505) 681 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: cluster 2026-03-09T17:43:56.888740+0000 mgr.y (mgr.14505) 682 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: cluster 2026-03-09T17:43:56.888740+0000 mgr.y (mgr.14505) 682 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: audit 2026-03-09T17:43:57.994832+0000 mon.c (mon.2) 822 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: audit 2026-03-09T17:43:57.994832+0000 mon.c (mon.2) 822 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: audit 2026-03-09T17:43:58.331950+0000 mon.c (mon.2) 823 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: audit 2026-03-09T17:43:58.331950+0000 mon.c (mon.2) 823 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: audit 2026-03-09T17:43:58.333174+0000 mon.c (mon.2) 824 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:43:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:58 vm02 bash[23351]: audit 2026-03-09T17:43:58.333174+0000 mon.c (mon.2) 824 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: cluster 2026-03-09T17:43:56.888740+0000 mgr.y (mgr.14505) 682 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: cluster 2026-03-09T17:43:56.888740+0000 mgr.y (mgr.14505) 682 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: audit 2026-03-09T17:43:57.994832+0000 mon.c (mon.2) 822 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: audit 2026-03-09T17:43:57.994832+0000 mon.c (mon.2) 822 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: audit 2026-03-09T17:43:58.331950+0000 mon.c (mon.2) 823 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: audit 2026-03-09T17:43:58.331950+0000 mon.c (mon.2) 823 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: audit 2026-03-09T17:43:58.333174+0000 mon.c (mon.2) 824 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:58 vm00 bash[20770]: audit 2026-03-09T17:43:58.333174+0000 mon.c (mon.2) 824 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: cluster 2026-03-09T17:43:56.888740+0000 mgr.y (mgr.14505) 682 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: cluster 2026-03-09T17:43:56.888740+0000 mgr.y (mgr.14505) 682 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: audit 2026-03-09T17:43:57.994832+0000 mon.c (mon.2) 822 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: audit 2026-03-09T17:43:57.994832+0000 mon.c (mon.2) 822 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: audit 2026-03-09T17:43:58.331950+0000 mon.c (mon.2) 823 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: audit 2026-03-09T17:43:58.331950+0000 mon.c (mon.2) 823 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:43:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: audit 2026-03-09T17:43:58.333174+0000 mon.c (mon.2) 824 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:43:59.038 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:58 vm00 bash[28333]: audit 2026-03-09T17:43:58.333174+0000 mon.c (mon.2) 824 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:43:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:59 vm02 bash[23351]: audit 2026-03-09T17:43:58.434480+0000 mon.c (mon.2) 825 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:59 vm02 bash[23351]: audit 2026-03-09T17:43:58.434480+0000 mon.c (mon.2) 825 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:43:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:59 vm02 bash[23351]: audit 2026-03-09T17:43:58.483427+0000 mon.a (mon.0) 3520 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:43:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:59 vm02 bash[23351]: audit 2026-03-09T17:43:58.483427+0000 mon.a (mon.0) 3520 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:43:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:59 vm02 bash[23351]: cluster 2026-03-09T17:43:58.889477+0000 mgr.y (mgr.14505) 683 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:43:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:43:59 vm02 bash[23351]: cluster 2026-03-09T17:43:58.889477+0000 mgr.y (mgr.14505) 683 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:59 vm00 bash[20770]: audit 2026-03-09T17:43:58.434480+0000 mon.c (mon.2) 825 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:59 vm00 bash[20770]: audit 2026-03-09T17:43:58.434480+0000 mon.c (mon.2) 825 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:59 vm00 bash[20770]: audit 2026-03-09T17:43:58.483427+0000 mon.a (mon.0) 3520 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:59 vm00 bash[20770]: audit 2026-03-09T17:43:58.483427+0000 mon.a (mon.0) 3520 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:59 vm00 bash[20770]: cluster 2026-03-09T17:43:58.889477+0000 mgr.y (mgr.14505) 683 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:43:59 vm00 bash[20770]: cluster 2026-03-09T17:43:58.889477+0000 mgr.y (mgr.14505) 683 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:59 vm00 bash[28333]: audit 2026-03-09T17:43:58.434480+0000 mon.c (mon.2) 825 
: audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:59 vm00 bash[28333]: audit 2026-03-09T17:43:58.434480+0000 mon.c (mon.2) 825 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:59 vm00 bash[28333]: audit 2026-03-09T17:43:58.483427+0000 mon.a (mon.0) 3520 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:59 vm00 bash[28333]: audit 2026-03-09T17:43:58.483427+0000 mon.a (mon.0) 3520 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:59 vm00 bash[28333]: cluster 2026-03-09T17:43:58.889477+0000 mgr.y (mgr.14505) 683 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:43:59 vm00 bash[28333]: cluster 2026-03-09T17:43:58.889477+0000 mgr.y (mgr.14505) 683 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:01 vm00 bash[20770]: cluster 2026-03-09T17:44:00.889904+0000 mgr.y (mgr.14505) 684 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:01 vm00 bash[20770]: cluster 2026-03-09T17:44:00.889904+0000 mgr.y (mgr.14505) 684 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:01 vm00 bash[28333]: cluster 2026-03-09T17:44:00.889904+0000 mgr.y (mgr.14505) 684 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:01 vm00 bash[28333]: cluster 2026-03-09T17:44:00.889904+0000 mgr.y (mgr.14505) 684 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:02.305 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:01 vm02 bash[23351]: cluster 2026-03-09T17:44:00.889904+0000 mgr.y (mgr.14505) 684 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:02.305 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:01 vm02 bash[23351]: cluster 2026-03-09T17:44:00.889904+0000 mgr.y (mgr.14505) 684 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:02.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:44:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:44:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:02 vm00 bash[28333]: audit 2026-03-09T17:44:02.305177+0000 mgr.y (mgr.14505) 685 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T17:44:03.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:02 vm00 bash[28333]: audit 2026-03-09T17:44:02.305177+0000 mgr.y (mgr.14505) 685 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:02 vm00 bash[20770]: audit 2026-03-09T17:44:02.305177+0000 mgr.y (mgr.14505) 685 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:03.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:02 vm00 bash[20770]: audit 2026-03-09T17:44:02.305177+0000 mgr.y (mgr.14505) 685 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:02 vm02 bash[23351]: audit 2026-03-09T17:44:02.305177+0000 mgr.y (mgr.14505) 685 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:02 vm02 bash[23351]: audit 2026-03-09T17:44:02.305177+0000 mgr.y (mgr.14505) 685 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:03 vm00 bash[20770]: cluster 2026-03-09T17:44:02.890553+0000 mgr.y (mgr.14505) 686 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:04.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:03 vm00 bash[20770]: cluster 2026-03-09T17:44:02.890553+0000 mgr.y (mgr.14505) 686 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:03 vm00 bash[28333]: cluster 2026-03-09T17:44:02.890553+0000 mgr.y (mgr.14505) 686 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:04.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:03 vm00 bash[28333]: cluster 2026-03-09T17:44:02.890553+0000 mgr.y (mgr.14505) 686 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:03 vm02 bash[23351]: cluster 2026-03-09T17:44:02.890553+0000 mgr.y (mgr.14505) 686 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:03 vm02 bash[23351]: cluster 2026-03-09T17:44:02.890553+0000 mgr.y (mgr.14505) 686 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:05 vm00 bash[20770]: cluster 2026-03-09T17:44:04.890953+0000 mgr.y (mgr.14505) 687 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:06.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:05 vm00 
bash[20770]: cluster 2026-03-09T17:44:04.890953+0000 mgr.y (mgr.14505) 687 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:05 vm00 bash[28333]: cluster 2026-03-09T17:44:04.890953+0000 mgr.y (mgr.14505) 687 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:06.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:05 vm00 bash[28333]: cluster 2026-03-09T17:44:04.890953+0000 mgr.y (mgr.14505) 687 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:05 vm02 bash[23351]: cluster 2026-03-09T17:44:04.890953+0000 mgr.y (mgr.14505) 687 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:05 vm02 bash[23351]: cluster 2026-03-09T17:44:04.890953+0000 mgr.y (mgr.14505) 687 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:44:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:44:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:44:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:07 vm00 bash[20770]: cluster 2026-03-09T17:44:06.891293+0000 mgr.y (mgr.14505) 688 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:07 vm00 bash[20770]: cluster 2026-03-09T17:44:06.891293+0000 mgr.y (mgr.14505) 688 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:07 vm00 bash[28333]: cluster 2026-03-09T17:44:06.891293+0000 mgr.y (mgr.14505) 688 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:07 vm00 bash[28333]: cluster 2026-03-09T17:44:06.891293+0000 mgr.y (mgr.14505) 688 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:07 vm02 bash[23351]: cluster 2026-03-09T17:44:06.891293+0000 mgr.y (mgr.14505) 688 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:07 vm02 bash[23351]: cluster 2026-03-09T17:44:06.891293+0000 mgr.y (mgr.14505) 688 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:09 vm00 bash[20770]: cluster 2026-03-09T17:44:08.891956+0000 mgr.y (mgr.14505) 689 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-09T17:44:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:09 vm00 bash[20770]: cluster 2026-03-09T17:44:08.891956+0000 mgr.y (mgr.14505) 689 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:09 vm00 bash[28333]: cluster 2026-03-09T17:44:08.891956+0000 mgr.y (mgr.14505) 689 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:09 vm00 bash[28333]: cluster 2026-03-09T17:44:08.891956+0000 mgr.y (mgr.14505) 689 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:09 vm02 bash[23351]: cluster 2026-03-09T17:44:08.891956+0000 mgr.y (mgr.14505) 689 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:09 vm02 bash[23351]: cluster 2026-03-09T17:44:08.891956+0000 mgr.y (mgr.14505) 689 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:11 vm00 bash[28333]: cluster 2026-03-09T17:44:10.892278+0000 mgr.y (mgr.14505) 690 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:11 vm00 bash[28333]: cluster 2026-03-09T17:44:10.892278+0000 mgr.y (mgr.14505) 690 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:11 vm00 bash[20770]: cluster 2026-03-09T17:44:10.892278+0000 mgr.y (mgr.14505) 690 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:11 vm00 bash[20770]: cluster 2026-03-09T17:44:10.892278+0000 mgr.y (mgr.14505) 690 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:12.312 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:11 vm02 bash[23351]: cluster 2026-03-09T17:44:10.892278+0000 mgr.y (mgr.14505) 690 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:12.312 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:11 vm02 bash[23351]: cluster 2026-03-09T17:44:10.892278+0000 mgr.y (mgr.14505) 690 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:12.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:44:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:44:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:12 vm00 bash[28333]: audit 2026-03-09T17:44:12.311897+0000 mgr.y (mgr.14505) 691 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:12 vm00 bash[28333]: audit 2026-03-09T17:44:12.311897+0000 mgr.y (mgr.14505) 691 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:12 vm00 bash[20770]: audit 2026-03-09T17:44:12.311897+0000 mgr.y (mgr.14505) 691 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:12 vm00 bash[20770]: audit 2026-03-09T17:44:12.311897+0000 mgr.y (mgr.14505) 691 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:12 vm02 bash[23351]: audit 2026-03-09T17:44:12.311897+0000 mgr.y (mgr.14505) 691 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:12 vm02 bash[23351]: audit 2026-03-09T17:44:12.311897+0000 mgr.y (mgr.14505) 691 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:14 vm00 bash[28333]: cluster 2026-03-09T17:44:12.893066+0000 mgr.y (mgr.14505) 692 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:14 vm00 bash[28333]: cluster 2026-03-09T17:44:12.893066+0000 mgr.y (mgr.14505) 692 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:14 vm00 bash[28333]: audit 2026-03-09T17:44:13.514853+0000 mon.c (mon.2) 826 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:14 vm00 bash[28333]: audit 2026-03-09T17:44:13.514853+0000 mon.c (mon.2) 826 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:14 vm00 bash[20770]: cluster 2026-03-09T17:44:12.893066+0000 mgr.y (mgr.14505) 692 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:14 vm00 bash[20770]: cluster 2026-03-09T17:44:12.893066+0000 mgr.y (mgr.14505) 692 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:14 vm00 bash[20770]: audit 2026-03-09T17:44:13.514853+0000 mon.c (mon.2) 826 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:14.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:14 vm00 bash[20770]: audit 2026-03-09T17:44:13.514853+0000 mon.c (mon.2) 826 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:14 vm02 bash[23351]: cluster 2026-03-09T17:44:12.893066+0000 mgr.y (mgr.14505) 692 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:14 vm02 bash[23351]: cluster 2026-03-09T17:44:12.893066+0000 mgr.y (mgr.14505) 692 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:14 vm02 bash[23351]: audit 2026-03-09T17:44:13.514853+0000 mon.c (mon.2) 826 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:14 vm02 bash[23351]: audit 2026-03-09T17:44:13.514853+0000 mon.c (mon.2) 826 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:16 vm00 bash[28333]: cluster 2026-03-09T17:44:14.893574+0000 mgr.y (mgr.14505) 693 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:16 vm00 bash[28333]: cluster 2026-03-09T17:44:14.893574+0000 mgr.y (mgr.14505) 693 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:16 vm00 bash[20770]: cluster 2026-03-09T17:44:14.893574+0000 mgr.y (mgr.14505) 693 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:16 vm00 bash[20770]: cluster 2026-03-09T17:44:14.893574+0000 mgr.y (mgr.14505) 693 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:16 vm02 bash[23351]: cluster 2026-03-09T17:44:14.893574+0000 mgr.y (mgr.14505) 693 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:16 vm02 bash[23351]: cluster 2026-03-09T17:44:14.893574+0000 mgr.y (mgr.14505) 693 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:44:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:44:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:44:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:18 vm00 bash[28333]: cluster 2026-03-09T17:44:16.894045+0000 mgr.y (mgr.14505) 694 : cluster [DBG] pgmap v1196: 228 pgs: 
228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:18 vm00 bash[28333]: cluster 2026-03-09T17:44:16.894045+0000 mgr.y (mgr.14505) 694 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:18 vm00 bash[20770]: cluster 2026-03-09T17:44:16.894045+0000 mgr.y (mgr.14505) 694 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:18 vm00 bash[20770]: cluster 2026-03-09T17:44:16.894045+0000 mgr.y (mgr.14505) 694 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:18 vm02 bash[23351]: cluster 2026-03-09T17:44:16.894045+0000 mgr.y (mgr.14505) 694 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:18 vm02 bash[23351]: cluster 2026-03-09T17:44:16.894045+0000 mgr.y (mgr.14505) 694 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:20 vm00 bash[28333]: cluster 2026-03-09T17:44:18.894750+0000 mgr.y (mgr.14505) 695 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:20 vm00 bash[28333]: cluster 2026-03-09T17:44:18.894750+0000 mgr.y (mgr.14505) 695 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:20 vm00 bash[20770]: cluster 2026-03-09T17:44:18.894750+0000 mgr.y (mgr.14505) 695 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:20 vm00 bash[20770]: cluster 2026-03-09T17:44:18.894750+0000 mgr.y (mgr.14505) 695 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:20 vm02 bash[23351]: cluster 2026-03-09T17:44:18.894750+0000 mgr.y (mgr.14505) 695 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:20.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:20 vm02 bash[23351]: cluster 2026-03-09T17:44:18.894750+0000 mgr.y (mgr.14505) 695 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:22 vm00 bash[28333]: cluster 2026-03-09T17:44:20.895065+0000 mgr.y (mgr.14505) 696 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T17:44:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:22 vm00 bash[28333]: cluster 2026-03-09T17:44:20.895065+0000 mgr.y (mgr.14505) 696 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:22 vm00 bash[20770]: cluster 2026-03-09T17:44:20.895065+0000 mgr.y (mgr.14505) 696 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:22 vm00 bash[20770]: cluster 2026-03-09T17:44:20.895065+0000 mgr.y (mgr.14505) 696 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:22.323 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:22 vm02 bash[23351]: cluster 2026-03-09T17:44:20.895065+0000 mgr.y (mgr.14505) 696 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:22.323 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:22 vm02 bash[23351]: cluster 2026-03-09T17:44:20.895065+0000 mgr.y (mgr.14505) 696 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:22.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:44:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:44:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:23 vm02 bash[23351]: audit 2026-03-09T17:44:22.322468+0000 mgr.y (mgr.14505) 697 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:23 vm02 bash[23351]: audit 2026-03-09T17:44:22.322468+0000 mgr.y (mgr.14505) 697 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:23 vm00 bash[28333]: audit 2026-03-09T17:44:22.322468+0000 mgr.y (mgr.14505) 697 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:23.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:23 vm00 bash[28333]: audit 2026-03-09T17:44:22.322468+0000 mgr.y (mgr.14505) 697 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:23 vm00 bash[20770]: audit 2026-03-09T17:44:22.322468+0000 mgr.y (mgr.14505) 697 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:23.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:23 vm00 bash[20770]: audit 2026-03-09T17:44:22.322468+0000 mgr.y (mgr.14505) 697 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:24 vm02 bash[23351]: cluster 2026-03-09T17:44:22.895782+0000 mgr.y (mgr.14505) 698 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:24 vm02 bash[23351]: cluster 2026-03-09T17:44:22.895782+0000 mgr.y (mgr.14505) 698 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:24 vm00 bash[28333]: cluster 2026-03-09T17:44:22.895782+0000 mgr.y (mgr.14505) 698 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:24 vm00 bash[28333]: cluster 2026-03-09T17:44:22.895782+0000 mgr.y (mgr.14505) 698 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:24 vm00 bash[20770]: cluster 2026-03-09T17:44:22.895782+0000 mgr.y (mgr.14505) 698 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:24 vm00 bash[20770]: cluster 2026-03-09T17:44:22.895782+0000 mgr.y (mgr.14505) 698 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:26 vm02 bash[23351]: cluster 2026-03-09T17:44:24.896146+0000 mgr.y (mgr.14505) 699 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:26 vm02 bash[23351]: cluster 2026-03-09T17:44:24.896146+0000 mgr.y (mgr.14505) 699 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:26 vm00 bash[28333]: cluster 2026-03-09T17:44:24.896146+0000 mgr.y (mgr.14505) 699 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:26 vm00 bash[28333]: cluster 2026-03-09T17:44:24.896146+0000 mgr.y (mgr.14505) 699 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:26 vm00 bash[20770]: cluster 2026-03-09T17:44:24.896146+0000 mgr.y (mgr.14505) 699 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:26 vm00 bash[20770]: cluster 2026-03-09T17:44:24.896146+0000 mgr.y (mgr.14505) 699 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:44:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:44:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:44:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:28 vm02 bash[23351]: cluster 2026-03-09T17:44:26.896416+0000 mgr.y 
(mgr.14505) 700 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:28 vm02 bash[23351]: cluster 2026-03-09T17:44:26.896416+0000 mgr.y (mgr.14505) 700 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:28 vm00 bash[28333]: cluster 2026-03-09T17:44:26.896416+0000 mgr.y (mgr.14505) 700 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:28 vm00 bash[28333]: cluster 2026-03-09T17:44:26.896416+0000 mgr.y (mgr.14505) 700 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:28 vm00 bash[20770]: cluster 2026-03-09T17:44:26.896416+0000 mgr.y (mgr.14505) 700 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:28 vm00 bash[20770]: cluster 2026-03-09T17:44:26.896416+0000 mgr.y (mgr.14505) 700 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:29 vm02 bash[23351]: audit 2026-03-09T17:44:28.521371+0000 mon.c (mon.2) 827 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:29 vm02 bash[23351]: audit 2026-03-09T17:44:28.521371+0000 mon.c (mon.2) 827 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:29 vm00 bash[28333]: audit 2026-03-09T17:44:28.521371+0000 mon.c (mon.2) 827 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:29 vm00 bash[28333]: audit 2026-03-09T17:44:28.521371+0000 mon.c (mon.2) 827 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:29 vm00 bash[20770]: audit 2026-03-09T17:44:28.521371+0000 mon.c (mon.2) 827 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:29 vm00 bash[20770]: audit 2026-03-09T17:44:28.521371+0000 mon.c (mon.2) 827 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:30 vm02 bash[23351]: cluster 2026-03-09T17:44:28.897137+0000 mgr.y (mgr.14505) 701 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 
455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:30 vm02 bash[23351]: cluster 2026-03-09T17:44:28.897137+0000 mgr.y (mgr.14505) 701 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:30 vm00 bash[28333]: cluster 2026-03-09T17:44:28.897137+0000 mgr.y (mgr.14505) 701 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:30 vm00 bash[28333]: cluster 2026-03-09T17:44:28.897137+0000 mgr.y (mgr.14505) 701 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:30 vm00 bash[20770]: cluster 2026-03-09T17:44:28.897137+0000 mgr.y (mgr.14505) 701 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:30 vm00 bash[20770]: cluster 2026-03-09T17:44:28.897137+0000 mgr.y (mgr.14505) 701 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:32 vm02 bash[23351]: cluster 2026-03-09T17:44:30.897431+0000 mgr.y (mgr.14505) 702 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:32 vm02 bash[23351]: cluster 2026-03-09T17:44:30.897431+0000 mgr.y (mgr.14505) 702 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:32.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:44:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:44:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:32 vm00 bash[28333]: cluster 2026-03-09T17:44:30.897431+0000 mgr.y (mgr.14505) 702 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:32 vm00 bash[28333]: cluster 2026-03-09T17:44:30.897431+0000 mgr.y (mgr.14505) 702 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:32 vm00 bash[20770]: cluster 2026-03-09T17:44:30.897431+0000 mgr.y (mgr.14505) 702 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:32 vm00 bash[20770]: cluster 2026-03-09T17:44:30.897431+0000 mgr.y (mgr.14505) 702 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:33 vm02 bash[23351]: audit 2026-03-09T17:44:32.325539+0000 mgr.y (mgr.14505) 703 : 
audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:33 vm02 bash[23351]: audit 2026-03-09T17:44:32.325539+0000 mgr.y (mgr.14505) 703 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:33 vm00 bash[28333]: audit 2026-03-09T17:44:32.325539+0000 mgr.y (mgr.14505) 703 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:33 vm00 bash[28333]: audit 2026-03-09T17:44:32.325539+0000 mgr.y (mgr.14505) 703 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:33 vm00 bash[20770]: audit 2026-03-09T17:44:32.325539+0000 mgr.y (mgr.14505) 703 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:33 vm00 bash[20770]: audit 2026-03-09T17:44:32.325539+0000 mgr.y (mgr.14505) 703 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:34 vm02 bash[23351]: cluster 2026-03-09T17:44:32.897901+0000 mgr.y (mgr.14505) 704 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:34 vm02 bash[23351]: cluster 2026-03-09T17:44:32.897901+0000 mgr.y (mgr.14505) 704 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:34 vm00 bash[28333]: cluster 2026-03-09T17:44:32.897901+0000 mgr.y (mgr.14505) 704 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:34 vm00 bash[28333]: cluster 2026-03-09T17:44:32.897901+0000 mgr.y (mgr.14505) 704 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:34 vm00 bash[20770]: cluster 2026-03-09T17:44:32.897901+0000 mgr.y (mgr.14505) 704 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:34 vm00 bash[20770]: cluster 2026-03-09T17:44:32.897901+0000 mgr.y (mgr.14505) 704 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:36 vm02 bash[23351]: cluster 2026-03-09T17:44:34.898229+0000 mgr.y (mgr.14505) 705 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:36 vm02 bash[23351]: cluster 2026-03-09T17:44:34.898229+0000 mgr.y (mgr.14505) 705 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:36 vm00 bash[28333]: cluster 2026-03-09T17:44:34.898229+0000 mgr.y (mgr.14505) 705 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:36 vm00 bash[28333]: cluster 2026-03-09T17:44:34.898229+0000 mgr.y (mgr.14505) 705 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:36 vm00 bash[20770]: cluster 2026-03-09T17:44:34.898229+0000 mgr.y (mgr.14505) 705 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:36 vm00 bash[20770]: cluster 2026-03-09T17:44:34.898229+0000 mgr.y (mgr.14505) 705 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:44:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:44:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:44:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:38 vm02 bash[23351]: cluster 2026-03-09T17:44:36.898491+0000 mgr.y (mgr.14505) 706 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:38 vm02 bash[23351]: cluster 2026-03-09T17:44:36.898491+0000 mgr.y (mgr.14505) 706 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:38 vm00 bash[28333]: cluster 2026-03-09T17:44:36.898491+0000 mgr.y (mgr.14505) 706 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:38 vm00 bash[28333]: cluster 2026-03-09T17:44:36.898491+0000 mgr.y (mgr.14505) 706 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:38 vm00 bash[20770]: cluster 2026-03-09T17:44:36.898491+0000 mgr.y (mgr.14505) 706 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:38 vm00 bash[20770]: cluster 2026-03-09T17:44:36.898491+0000 mgr.y (mgr.14505) 706 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:40 vm02 bash[23351]: cluster 2026-03-09T17:44:38.899220+0000 mgr.y (mgr.14505) 707 : cluster 
[DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:40 vm02 bash[23351]: cluster 2026-03-09T17:44:38.899220+0000 mgr.y (mgr.14505) 707 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:40 vm00 bash[20770]: cluster 2026-03-09T17:44:38.899220+0000 mgr.y (mgr.14505) 707 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:40 vm00 bash[20770]: cluster 2026-03-09T17:44:38.899220+0000 mgr.y (mgr.14505) 707 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:40 vm00 bash[28333]: cluster 2026-03-09T17:44:38.899220+0000 mgr.y (mgr.14505) 707 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:40 vm00 bash[28333]: cluster 2026-03-09T17:44:38.899220+0000 mgr.y (mgr.14505) 707 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:42 vm00 bash[28333]: cluster 2026-03-09T17:44:40.899620+0000 mgr.y (mgr.14505) 708 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:42 vm00 bash[28333]: cluster 2026-03-09T17:44:40.899620+0000 mgr.y (mgr.14505) 708 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:42 vm00 bash[20770]: cluster 2026-03-09T17:44:40.899620+0000 mgr.y (mgr.14505) 708 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:42 vm00 bash[20770]: cluster 2026-03-09T17:44:40.899620+0000 mgr.y (mgr.14505) 708 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:42 vm02 bash[23351]: cluster 2026-03-09T17:44:40.899620+0000 mgr.y (mgr.14505) 708 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:42 vm02 bash[23351]: cluster 2026-03-09T17:44:40.899620+0000 mgr.y (mgr.14505) 708 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:42.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:44:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:44:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:43 vm00 bash[28333]: audit 
2026-03-09T17:44:42.329910+0000 mgr.y (mgr.14505) 709 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:43 vm00 bash[28333]: audit 2026-03-09T17:44:42.329910+0000 mgr.y (mgr.14505) 709 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:43 vm00 bash[20770]: audit 2026-03-09T17:44:42.329910+0000 mgr.y (mgr.14505) 709 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:43 vm00 bash[20770]: audit 2026-03-09T17:44:42.329910+0000 mgr.y (mgr.14505) 709 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:43 vm02 bash[23351]: audit 2026-03-09T17:44:42.329910+0000 mgr.y (mgr.14505) 709 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:43 vm02 bash[23351]: audit 2026-03-09T17:44:42.329910+0000 mgr.y (mgr.14505) 709 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:44 vm02 bash[23351]: cluster 2026-03-09T17:44:42.900318+0000 mgr.y (mgr.14505) 710 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:44 vm02 bash[23351]: cluster 2026-03-09T17:44:42.900318+0000 mgr.y (mgr.14505) 710 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:44 vm02 bash[23351]: audit 2026-03-09T17:44:43.527890+0000 mon.c (mon.2) 828 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:44 vm02 bash[23351]: audit 2026-03-09T17:44:43.527890+0000 mon.c (mon.2) 828 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:44 vm00 bash[28333]: cluster 2026-03-09T17:44:42.900318+0000 mgr.y (mgr.14505) 710 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:44 vm00 bash[28333]: cluster 2026-03-09T17:44:42.900318+0000 mgr.y (mgr.14505) 710 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:44 vm00 bash[28333]: audit 2026-03-09T17:44:43.527890+0000 mon.c (mon.2) 828 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:44 vm00 bash[28333]: audit 2026-03-09T17:44:43.527890+0000 mon.c (mon.2) 828 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:44 vm00 bash[20770]: cluster 2026-03-09T17:44:42.900318+0000 mgr.y (mgr.14505) 710 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:44 vm00 bash[20770]: cluster 2026-03-09T17:44:42.900318+0000 mgr.y (mgr.14505) 710 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:44 vm00 bash[20770]: audit 2026-03-09T17:44:43.527890+0000 mon.c (mon.2) 828 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:44 vm00 bash[20770]: audit 2026-03-09T17:44:43.527890+0000 mon.c (mon.2) 828 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:46 vm02 bash[23351]: cluster 2026-03-09T17:44:44.900718+0000 mgr.y (mgr.14505) 711 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:46 vm02 bash[23351]: cluster 2026-03-09T17:44:44.900718+0000 mgr.y (mgr.14505) 711 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:46 vm00 bash[28333]: cluster 2026-03-09T17:44:44.900718+0000 mgr.y (mgr.14505) 711 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:46 vm00 bash[28333]: cluster 2026-03-09T17:44:44.900718+0000 mgr.y (mgr.14505) 711 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:46 vm00 bash[20770]: cluster 2026-03-09T17:44:44.900718+0000 mgr.y (mgr.14505) 711 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:46 vm00 bash[20770]: cluster 2026-03-09T17:44:44.900718+0000 mgr.y (mgr.14505) 711 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:44:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:44:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:44:48.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:48 vm02 bash[23351]: cluster 2026-03-09T17:44:46.901057+0000 mgr.y (mgr.14505) 712 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:48 vm02 bash[23351]: cluster 2026-03-09T17:44:46.901057+0000 mgr.y (mgr.14505) 712 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:48 vm00 bash[28333]: cluster 2026-03-09T17:44:46.901057+0000 mgr.y (mgr.14505) 712 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:48 vm00 bash[28333]: cluster 2026-03-09T17:44:46.901057+0000 mgr.y (mgr.14505) 712 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:48 vm00 bash[20770]: cluster 2026-03-09T17:44:46.901057+0000 mgr.y (mgr.14505) 712 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:48 vm00 bash[20770]: cluster 2026-03-09T17:44:46.901057+0000 mgr.y (mgr.14505) 712 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:50 vm02 bash[23351]: cluster 2026-03-09T17:44:48.901790+0000 mgr.y (mgr.14505) 713 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:50 vm02 bash[23351]: cluster 2026-03-09T17:44:48.901790+0000 mgr.y (mgr.14505) 713 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:50 vm00 bash[28333]: cluster 2026-03-09T17:44:48.901790+0000 mgr.y (mgr.14505) 713 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:50 vm00 bash[28333]: cluster 2026-03-09T17:44:48.901790+0000 mgr.y (mgr.14505) 713 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:50 vm00 bash[20770]: cluster 2026-03-09T17:44:48.901790+0000 mgr.y (mgr.14505) 713 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:50 vm00 bash[20770]: cluster 2026-03-09T17:44:48.901790+0000 mgr.y (mgr.14505) 713 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:52 vm02 bash[23351]: cluster 
2026-03-09T17:44:50.902144+0000 mgr.y (mgr.14505) 714 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:52 vm02 bash[23351]: cluster 2026-03-09T17:44:50.902144+0000 mgr.y (mgr.14505) 714 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:52.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:44:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:44:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:52 vm00 bash[28333]: cluster 2026-03-09T17:44:50.902144+0000 mgr.y (mgr.14505) 714 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:52 vm00 bash[28333]: cluster 2026-03-09T17:44:50.902144+0000 mgr.y (mgr.14505) 714 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:52 vm00 bash[20770]: cluster 2026-03-09T17:44:50.902144+0000 mgr.y (mgr.14505) 714 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:52 vm00 bash[20770]: cluster 2026-03-09T17:44:50.902144+0000 mgr.y (mgr.14505) 714 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:53 vm02 bash[23351]: audit 2026-03-09T17:44:52.334194+0000 mgr.y (mgr.14505) 715 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:53.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:53 vm02 bash[23351]: audit 2026-03-09T17:44:52.334194+0000 mgr.y (mgr.14505) 715 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:53 vm00 bash[28333]: audit 2026-03-09T17:44:52.334194+0000 mgr.y (mgr.14505) 715 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:53.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:53 vm00 bash[28333]: audit 2026-03-09T17:44:52.334194+0000 mgr.y (mgr.14505) 715 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:53 vm00 bash[20770]: audit 2026-03-09T17:44:52.334194+0000 mgr.y (mgr.14505) 715 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:53.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:53 vm00 bash[20770]: audit 2026-03-09T17:44:52.334194+0000 mgr.y (mgr.14505) 715 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:44:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:44:54 vm02 bash[23351]: cluster 2026-03-09T17:44:52.902631+0000 mgr.y (mgr.14505) 716 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:54 vm02 bash[23351]: cluster 2026-03-09T17:44:52.902631+0000 mgr.y (mgr.14505) 716 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:54.787 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:54 vm00 bash[28333]: cluster 2026-03-09T17:44:52.902631+0000 mgr.y (mgr.14505) 716 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:54 vm00 bash[28333]: cluster 2026-03-09T17:44:52.902631+0000 mgr.y (mgr.14505) 716 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:54 vm00 bash[20770]: cluster 2026-03-09T17:44:52.902631+0000 mgr.y (mgr.14505) 716 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:54 vm00 bash[20770]: cluster 2026-03-09T17:44:52.902631+0000 mgr.y (mgr.14505) 716 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:56.571 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:44:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:44:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:44:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:56 vm02 bash[23351]: cluster 2026-03-09T17:44:54.903025+0000 mgr.y (mgr.14505) 717 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:56 vm02 bash[23351]: cluster 2026-03-09T17:44:54.903025+0000 mgr.y (mgr.14505) 717 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:56 vm00 bash[28333]: cluster 2026-03-09T17:44:54.903025+0000 mgr.y (mgr.14505) 717 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:56 vm00 bash[28333]: cluster 2026-03-09T17:44:54.903025+0000 mgr.y (mgr.14505) 717 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:56 vm00 bash[20770]: cluster 2026-03-09T17:44:54.903025+0000 mgr.y (mgr.14505) 717 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:57.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:56 vm00 bash[20770]: cluster 2026-03-09T17:44:54.903025+0000 mgr.y (mgr.14505) 717 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:57 vm02 bash[23351]: cluster 2026-03-09T17:44:56.903386+0000 mgr.y (mgr.14505) 718 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:57 vm02 bash[23351]: cluster 2026-03-09T17:44:56.903386+0000 mgr.y (mgr.14505) 718 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:57 vm00 bash[28333]: cluster 2026-03-09T17:44:56.903386+0000 mgr.y (mgr.14505) 718 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:57 vm00 bash[28333]: cluster 2026-03-09T17:44:56.903386+0000 mgr.y (mgr.14505) 718 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:57 vm00 bash[20770]: cluster 2026-03-09T17:44:56.903386+0000 mgr.y (mgr.14505) 718 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:57 vm00 bash[20770]: cluster 2026-03-09T17:44:56.903386+0000 mgr.y (mgr.14505) 718 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:44:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:58 vm02 bash[23351]: audit 2026-03-09T17:44:58.543893+0000 mon.c (mon.2) 829 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:58 vm02 bash[23351]: audit 2026-03-09T17:44:58.543893+0000 mon.c (mon.2) 829 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:58 vm02 bash[23351]: audit 2026-03-09T17:44:58.551738+0000 mon.c (mon.2) 830 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:44:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:58 vm02 bash[23351]: audit 2026-03-09T17:44:58.551738+0000 mon.c (mon.2) 830 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:58 vm00 bash[28333]: audit 2026-03-09T17:44:58.543893+0000 mon.c (mon.2) 829 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:58 vm00 bash[28333]: audit 2026-03-09T17:44:58.543893+0000 mon.c (mon.2) 829 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:58 vm00 bash[28333]: audit 2026-03-09T17:44:58.551738+0000 mon.c (mon.2) 830 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:58 vm00 bash[28333]: audit 2026-03-09T17:44:58.551738+0000 mon.c (mon.2) 830 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:58 vm00 bash[20770]: audit 2026-03-09T17:44:58.543893+0000 mon.c (mon.2) 829 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:58 vm00 bash[20770]: audit 2026-03-09T17:44:58.543893+0000 mon.c (mon.2) 829 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:58 vm00 bash[20770]: audit 2026-03-09T17:44:58.551738+0000 mon.c (mon.2) 830 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:44:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:58 vm00 bash[20770]: audit 2026-03-09T17:44:58.551738+0000 mon.c (mon.2) 830 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: cluster 2026-03-09T17:44:58.904037+0000 mgr.y (mgr.14505) 719 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: cluster 2026-03-09T17:44:58.904037+0000 mgr.y (mgr.14505) 719 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: audit 2026-03-09T17:44:58.905133+0000 mon.c (mon.2) 831 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: audit 2026-03-09T17:44:58.905133+0000 mon.c (mon.2) 831 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: audit 2026-03-09T17:44:58.905864+0000 mon.c (mon.2) 832 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: audit 2026-03-09T17:44:58.905864+0000 mon.c (mon.2) 832 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: audit 
2026-03-09T17:44:58.960381+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:44:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:44:59 vm02 bash[23351]: audit 2026-03-09T17:44:58.960381+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: cluster 2026-03-09T17:44:58.904037+0000 mgr.y (mgr.14505) 719 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: cluster 2026-03-09T17:44:58.904037+0000 mgr.y (mgr.14505) 719 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: audit 2026-03-09T17:44:58.905133+0000 mon.c (mon.2) 831 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: audit 2026-03-09T17:44:58.905133+0000 mon.c (mon.2) 831 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: audit 2026-03-09T17:44:58.905864+0000 mon.c (mon.2) 832 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: audit 2026-03-09T17:44:58.905864+0000 mon.c (mon.2) 832 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: audit 2026-03-09T17:44:58.960381+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:44:59 vm00 bash[28333]: audit 2026-03-09T17:44:58.960381+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: cluster 2026-03-09T17:44:58.904037+0000 mgr.y (mgr.14505) 719 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: cluster 2026-03-09T17:44:58.904037+0000 mgr.y (mgr.14505) 719 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: audit 2026-03-09T17:44:58.905133+0000 mon.c (mon.2) 831 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: audit 2026-03-09T17:44:58.905133+0000 mon.c (mon.2) 831 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: audit 2026-03-09T17:44:58.905864+0000 mon.c (mon.2) 832 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:45:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: audit 2026-03-09T17:44:58.905864+0000 mon.c (mon.2) 832 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:45:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: audit 2026-03-09T17:44:58.960381+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:45:00.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:44:59 vm00 bash[20770]: audit 2026-03-09T17:44:58.960381+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:45:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:02 vm00 bash[28333]: cluster 2026-03-09T17:45:00.904358+0000 mgr.y (mgr.14505) 720 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:02 vm00 bash[28333]: cluster 2026-03-09T17:45:00.904358+0000 mgr.y (mgr.14505) 720 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:02 vm00 bash[20770]: cluster 2026-03-09T17:45:00.904358+0000 mgr.y (mgr.14505) 720 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:02 vm00 bash[20770]: cluster 2026-03-09T17:45:00.904358+0000 mgr.y (mgr.14505) 720 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:02.344 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:02 vm02 bash[23351]: cluster 2026-03-09T17:45:00.904358+0000 mgr.y (mgr.14505) 720 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:02.344 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:02 vm02 bash[23351]: cluster 2026-03-09T17:45:00.904358+0000 mgr.y (mgr.14505) 720 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:02.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:45:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:45:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:03 vm02 bash[23351]: audit 2026-03-09T17:45:02.343559+0000 mgr.y (mgr.14505) 721 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:03.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:03 vm02 bash[23351]: audit 2026-03-09T17:45:02.343559+0000 mgr.y (mgr.14505) 721 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:03.537 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:03 
vm00 bash[28333]: audit 2026-03-09T17:45:02.343559+0000 mgr.y (mgr.14505) 721 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:03 vm00 bash[28333]: audit 2026-03-09T17:45:02.343559+0000 mgr.y (mgr.14505) 721 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:03 vm00 bash[20770]: audit 2026-03-09T17:45:02.343559+0000 mgr.y (mgr.14505) 721 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:03 vm00 bash[20770]: audit 2026-03-09T17:45:02.343559+0000 mgr.y (mgr.14505) 721 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:04 vm02 bash[23351]: cluster 2026-03-09T17:45:02.905039+0000 mgr.y (mgr.14505) 722 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:04 vm02 bash[23351]: cluster 2026-03-09T17:45:02.905039+0000 mgr.y (mgr.14505) 722 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:04 vm00 bash[28333]: cluster 2026-03-09T17:45:02.905039+0000 mgr.y (mgr.14505) 722 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:04 vm00 bash[28333]: cluster 2026-03-09T17:45:02.905039+0000 mgr.y (mgr.14505) 722 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:04 vm00 bash[20770]: cluster 2026-03-09T17:45:02.905039+0000 mgr.y (mgr.14505) 722 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:04 vm00 bash[20770]: cluster 2026-03-09T17:45:02.905039+0000 mgr.y (mgr.14505) 722 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:06 vm02 bash[23351]: cluster 2026-03-09T17:45:04.905440+0000 mgr.y (mgr.14505) 723 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:06.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:06 vm02 bash[23351]: cluster 2026-03-09T17:45:04.905440+0000 mgr.y (mgr.14505) 723 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:06 vm00 bash[28333]: cluster 2026-03-09T17:45:04.905440+0000 mgr.y (mgr.14505) 723 : cluster [DBG] pgmap 
v1220: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:06 vm00 bash[28333]: cluster 2026-03-09T17:45:04.905440+0000 mgr.y (mgr.14505) 723 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:06 vm00 bash[20770]: cluster 2026-03-09T17:45:04.905440+0000 mgr.y (mgr.14505) 723 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:06 vm00 bash[20770]: cluster 2026-03-09T17:45:04.905440+0000 mgr.y (mgr.14505) 723 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:45:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:45:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:45:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:08 vm00 bash[28333]: cluster 2026-03-09T17:45:06.905774+0000 mgr.y (mgr.14505) 724 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:08 vm00 bash[28333]: cluster 2026-03-09T17:45:06.905774+0000 mgr.y (mgr.14505) 724 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:08 vm00 bash[20770]: cluster 2026-03-09T17:45:06.905774+0000 mgr.y (mgr.14505) 724 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:08 vm00 bash[20770]: cluster 2026-03-09T17:45:06.905774+0000 mgr.y (mgr.14505) 724 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:08 vm02 bash[23351]: cluster 2026-03-09T17:45:06.905774+0000 mgr.y (mgr.14505) 724 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:08 vm02 bash[23351]: cluster 2026-03-09T17:45:06.905774+0000 mgr.y (mgr.14505) 724 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:10 vm00 bash[28333]: cluster 2026-03-09T17:45:08.906604+0000 mgr.y (mgr.14505) 725 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:10 vm00 bash[28333]: cluster 2026-03-09T17:45:08.906604+0000 mgr.y (mgr.14505) 725 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:10 vm00 
bash[20770]: cluster 2026-03-09T17:45:08.906604+0000 mgr.y (mgr.14505) 725 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:10 vm00 bash[20770]: cluster 2026-03-09T17:45:08.906604+0000 mgr.y (mgr.14505) 725 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:10 vm02 bash[23351]: cluster 2026-03-09T17:45:08.906604+0000 mgr.y (mgr.14505) 725 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:10 vm02 bash[23351]: cluster 2026-03-09T17:45:08.906604+0000 mgr.y (mgr.14505) 725 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:12 vm02 bash[23351]: cluster 2026-03-09T17:45:10.906905+0000 mgr.y (mgr.14505) 726 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:12 vm02 bash[23351]: cluster 2026-03-09T17:45:10.906905+0000 mgr.y (mgr.14505) 726 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:12.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:45:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:45:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:12 vm00 bash[28333]: cluster 2026-03-09T17:45:10.906905+0000 mgr.y (mgr.14505) 726 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:13.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:12 vm00 bash[28333]: cluster 2026-03-09T17:45:10.906905+0000 mgr.y (mgr.14505) 726 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:12 vm00 bash[20770]: cluster 2026-03-09T17:45:10.906905+0000 mgr.y (mgr.14505) 726 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:13.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:12 vm00 bash[20770]: cluster 2026-03-09T17:45:10.906905+0000 mgr.y (mgr.14505) 726 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:13 vm02 bash[23351]: audit 2026-03-09T17:45:12.352180+0000 mgr.y (mgr.14505) 727 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:13 vm02 bash[23351]: audit 2026-03-09T17:45:12.352180+0000 mgr.y (mgr.14505) 727 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:13.886 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:13 vm02 bash[23351]: cluster 2026-03-09T17:45:12.907529+0000 mgr.y (mgr.14505) 728 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:13 vm02 bash[23351]: cluster 2026-03-09T17:45:12.907529+0000 mgr.y (mgr.14505) 728 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:13 vm02 bash[23351]: audit 2026-03-09T17:45:13.550130+0000 mon.c (mon.2) 833 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:13 vm02 bash[23351]: audit 2026-03-09T17:45:13.550130+0000 mon.c (mon.2) 833 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:13 vm00 bash[28333]: audit 2026-03-09T17:45:12.352180+0000 mgr.y (mgr.14505) 727 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:13 vm00 bash[28333]: audit 2026-03-09T17:45:12.352180+0000 mgr.y (mgr.14505) 727 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:13 vm00 bash[28333]: cluster 2026-03-09T17:45:12.907529+0000 mgr.y (mgr.14505) 728 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:13 vm00 bash[28333]: cluster 2026-03-09T17:45:12.907529+0000 mgr.y (mgr.14505) 728 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:13 vm00 bash[28333]: audit 2026-03-09T17:45:13.550130+0000 mon.c (mon.2) 833 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:13 vm00 bash[28333]: audit 2026-03-09T17:45:13.550130+0000 mon.c (mon.2) 833 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:13 vm00 bash[20770]: audit 2026-03-09T17:45:12.352180+0000 mgr.y (mgr.14505) 727 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:13 vm00 bash[20770]: audit 2026-03-09T17:45:12.352180+0000 mgr.y (mgr.14505) 727 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:13 vm00 bash[20770]: cluster 
2026-03-09T17:45:12.907529+0000 mgr.y (mgr.14505) 728 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:13 vm00 bash[20770]: cluster 2026-03-09T17:45:12.907529+0000 mgr.y (mgr.14505) 728 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:13 vm00 bash[20770]: audit 2026-03-09T17:45:13.550130+0000 mon.c (mon.2) 833 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:14.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:13 vm00 bash[20770]: audit 2026-03-09T17:45:13.550130+0000 mon.c (mon.2) 833 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:15 vm00 bash[28333]: cluster 2026-03-09T17:45:14.907885+0000 mgr.y (mgr.14505) 729 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:16.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:15 vm00 bash[28333]: cluster 2026-03-09T17:45:14.907885+0000 mgr.y (mgr.14505) 729 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:15 vm00 bash[20770]: cluster 2026-03-09T17:45:14.907885+0000 mgr.y (mgr.14505) 729 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:16.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:15 vm00 bash[20770]: cluster 2026-03-09T17:45:14.907885+0000 mgr.y (mgr.14505) 729 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:15 vm02 bash[23351]: cluster 2026-03-09T17:45:14.907885+0000 mgr.y (mgr.14505) 729 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:15 vm02 bash[23351]: cluster 2026-03-09T17:45:14.907885+0000 mgr.y (mgr.14505) 729 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:45:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:45:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:45:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:17 vm00 bash[28333]: cluster 2026-03-09T17:45:16.908222+0000 mgr.y (mgr.14505) 730 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:18.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:17 vm00 bash[28333]: cluster 2026-03-09T17:45:16.908222+0000 mgr.y (mgr.14505) 730 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T17:45:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:17 vm00 bash[20770]: cluster 2026-03-09T17:45:16.908222+0000 mgr.y (mgr.14505) 730 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:18.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:17 vm00 bash[20770]: cluster 2026-03-09T17:45:16.908222+0000 mgr.y (mgr.14505) 730 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:17 vm02 bash[23351]: cluster 2026-03-09T17:45:16.908222+0000 mgr.y (mgr.14505) 730 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:18.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:17 vm02 bash[23351]: cluster 2026-03-09T17:45:16.908222+0000 mgr.y (mgr.14505) 730 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:19 vm00 bash[28333]: cluster 2026-03-09T17:45:18.908853+0000 mgr.y (mgr.14505) 731 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:20.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:19 vm00 bash[28333]: cluster 2026-03-09T17:45:18.908853+0000 mgr.y (mgr.14505) 731 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:19 vm00 bash[20770]: cluster 2026-03-09T17:45:18.908853+0000 mgr.y (mgr.14505) 731 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:20.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:19 vm00 bash[20770]: cluster 2026-03-09T17:45:18.908853+0000 mgr.y (mgr.14505) 731 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:20.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:19 vm02 bash[23351]: cluster 2026-03-09T17:45:18.908853+0000 mgr.y (mgr.14505) 731 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:20.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:19 vm02 bash[23351]: cluster 2026-03-09T17:45:18.908853+0000 mgr.y (mgr.14505) 731 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:21 vm00 bash[28333]: cluster 2026-03-09T17:45:20.909133+0000 mgr.y (mgr.14505) 732 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:21 vm00 bash[28333]: cluster 2026-03-09T17:45:20.909133+0000 mgr.y (mgr.14505) 732 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:21 vm00 
bash[20770]: cluster 2026-03-09T17:45:20.909133+0000 mgr.y (mgr.14505) 732 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:21 vm00 bash[20770]: cluster 2026-03-09T17:45:20.909133+0000 mgr.y (mgr.14505) 732 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:22.362 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:21 vm02 bash[23351]: cluster 2026-03-09T17:45:20.909133+0000 mgr.y (mgr.14505) 732 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:22.362 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:21 vm02 bash[23351]: cluster 2026-03-09T17:45:20.909133+0000 mgr.y (mgr.14505) 732 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:22.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:45:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:45:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:22 vm00 bash[28333]: audit 2026-03-09T17:45:22.362310+0000 mgr.y (mgr.14505) 733 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:23.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:22 vm00 bash[28333]: audit 2026-03-09T17:45:22.362310+0000 mgr.y (mgr.14505) 733 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:22 vm00 bash[20770]: audit 2026-03-09T17:45:22.362310+0000 mgr.y (mgr.14505) 733 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:23.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:22 vm00 bash[20770]: audit 2026-03-09T17:45:22.362310+0000 mgr.y (mgr.14505) 733 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:22 vm02 bash[23351]: audit 2026-03-09T17:45:22.362310+0000 mgr.y (mgr.14505) 733 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:23.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:22 vm02 bash[23351]: audit 2026-03-09T17:45:22.362310+0000 mgr.y (mgr.14505) 733 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:23 vm00 bash[28333]: cluster 2026-03-09T17:45:22.909591+0000 mgr.y (mgr.14505) 734 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:23 vm00 bash[28333]: cluster 2026-03-09T17:45:22.909591+0000 mgr.y (mgr.14505) 734 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:24.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:23 vm00 bash[20770]: cluster 2026-03-09T17:45:22.909591+0000 mgr.y (mgr.14505) 734 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:24.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:23 vm00 bash[20770]: cluster 2026-03-09T17:45:22.909591+0000 mgr.y (mgr.14505) 734 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:23 vm02 bash[23351]: cluster 2026-03-09T17:45:22.909591+0000 mgr.y (mgr.14505) 734 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:23 vm02 bash[23351]: cluster 2026-03-09T17:45:22.909591+0000 mgr.y (mgr.14505) 734 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:26 vm00 bash[28333]: cluster 2026-03-09T17:45:24.909932+0000 mgr.y (mgr.14505) 735 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:26.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:26 vm00 bash[28333]: cluster 2026-03-09T17:45:24.909932+0000 mgr.y (mgr.14505) 735 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:26 vm00 bash[20770]: cluster 2026-03-09T17:45:24.909932+0000 mgr.y (mgr.14505) 735 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:26.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:26 vm00 bash[20770]: cluster 2026-03-09T17:45:24.909932+0000 mgr.y (mgr.14505) 735 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:26 vm02 bash[23351]: cluster 2026-03-09T17:45:24.909932+0000 mgr.y (mgr.14505) 735 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:26 vm02 bash[23351]: cluster 2026-03-09T17:45:24.909932+0000 mgr.y (mgr.14505) 735 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:45:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:45:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:45:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:28 vm00 bash[28333]: cluster 2026-03-09T17:45:26.910174+0000 mgr.y (mgr.14505) 736 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:28 vm00 bash[28333]: cluster 2026-03-09T17:45:26.910174+0000 mgr.y (mgr.14505) 736 : cluster [DBG] pgmap v1231: 228 pgs: 228 
active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:28 vm00 bash[20770]: cluster 2026-03-09T17:45:26.910174+0000 mgr.y (mgr.14505) 736 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:28 vm00 bash[20770]: cluster 2026-03-09T17:45:26.910174+0000 mgr.y (mgr.14505) 736 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:28 vm02 bash[23351]: cluster 2026-03-09T17:45:26.910174+0000 mgr.y (mgr.14505) 736 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:28 vm02 bash[23351]: cluster 2026-03-09T17:45:26.910174+0000 mgr.y (mgr.14505) 736 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:29 vm00 bash[28333]: audit 2026-03-09T17:45:28.556497+0000 mon.c (mon.2) 834 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:29 vm00 bash[28333]: audit 2026-03-09T17:45:28.556497+0000 mon.c (mon.2) 834 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:29 vm00 bash[20770]: audit 2026-03-09T17:45:28.556497+0000 mon.c (mon.2) 834 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:29 vm00 bash[20770]: audit 2026-03-09T17:45:28.556497+0000 mon.c (mon.2) 834 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:29 vm02 bash[23351]: audit 2026-03-09T17:45:28.556497+0000 mon.c (mon.2) 834 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:29 vm02 bash[23351]: audit 2026-03-09T17:45:28.556497+0000 mon.c (mon.2) 834 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:30 vm00 bash[28333]: cluster 2026-03-09T17:45:28.910926+0000 mgr.y (mgr.14505) 737 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:30 vm00 bash[28333]: cluster 2026-03-09T17:45:28.910926+0000 mgr.y (mgr.14505) 737 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T17:45:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:30 vm00 bash[20770]: cluster 2026-03-09T17:45:28.910926+0000 mgr.y (mgr.14505) 737 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:30 vm00 bash[20770]: cluster 2026-03-09T17:45:28.910926+0000 mgr.y (mgr.14505) 737 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:30 vm02 bash[23351]: cluster 2026-03-09T17:45:28.910926+0000 mgr.y (mgr.14505) 737 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:30 vm02 bash[23351]: cluster 2026-03-09T17:45:28.910926+0000 mgr.y (mgr.14505) 737 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:32.373 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:32 vm02 bash[23351]: cluster 2026-03-09T17:45:30.911239+0000 mgr.y (mgr.14505) 738 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:32.373 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:32 vm02 bash[23351]: cluster 2026-03-09T17:45:30.911239+0000 mgr.y (mgr.14505) 738 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:32 vm00 bash[28333]: cluster 2026-03-09T17:45:30.911239+0000 mgr.y (mgr.14505) 738 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:32 vm00 bash[28333]: cluster 2026-03-09T17:45:30.911239+0000 mgr.y (mgr.14505) 738 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:32 vm00 bash[20770]: cluster 2026-03-09T17:45:30.911239+0000 mgr.y (mgr.14505) 738 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:32 vm00 bash[20770]: cluster 2026-03-09T17:45:30.911239+0000 mgr.y (mgr.14505) 738 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:32.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:45:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:45:33.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:33 vm02 bash[23351]: audit 2026-03-09T17:45:32.372892+0000 mgr.y (mgr.14505) 739 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:33.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:33 vm02 bash[23351]: audit 2026-03-09T17:45:32.372892+0000 mgr.y (mgr.14505) 739 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:33 vm00 bash[20770]: audit 2026-03-09T17:45:32.372892+0000 mgr.y (mgr.14505) 739 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:33.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:33 vm00 bash[20770]: audit 2026-03-09T17:45:32.372892+0000 mgr.y (mgr.14505) 739 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:33 vm00 bash[28333]: audit 2026-03-09T17:45:32.372892+0000 mgr.y (mgr.14505) 739 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:33.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:33 vm00 bash[28333]: audit 2026-03-09T17:45:32.372892+0000 mgr.y (mgr.14505) 739 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:34.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:34 vm02 bash[23351]: cluster 2026-03-09T17:45:32.911848+0000 mgr.y (mgr.14505) 740 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:34 vm02 bash[23351]: cluster 2026-03-09T17:45:32.911848+0000 mgr.y (mgr.14505) 740 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:34 vm00 bash[28333]: cluster 2026-03-09T17:45:32.911848+0000 mgr.y (mgr.14505) 740 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:34 vm00 bash[28333]: cluster 2026-03-09T17:45:32.911848+0000 mgr.y (mgr.14505) 740 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:34 vm00 bash[20770]: cluster 2026-03-09T17:45:32.911848+0000 mgr.y (mgr.14505) 740 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:34 vm00 bash[20770]: cluster 2026-03-09T17:45:32.911848+0000 mgr.y (mgr.14505) 740 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:36 vm02 bash[23351]: cluster 2026-03-09T17:45:34.912178+0000 mgr.y (mgr.14505) 741 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:36 vm02 bash[23351]: cluster 2026-03-09T17:45:34.912178+0000 mgr.y (mgr.14505) 741 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:36.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:36 vm00 bash[28333]: cluster 2026-03-09T17:45:34.912178+0000 mgr.y (mgr.14505) 741 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:36 vm00 bash[28333]: cluster 2026-03-09T17:45:34.912178+0000 mgr.y (mgr.14505) 741 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:36 vm00 bash[20770]: cluster 2026-03-09T17:45:34.912178+0000 mgr.y (mgr.14505) 741 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:36 vm00 bash[20770]: cluster 2026-03-09T17:45:34.912178+0000 mgr.y (mgr.14505) 741 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:45:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:45:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:45:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:38 vm02 bash[23351]: cluster 2026-03-09T17:45:36.912530+0000 mgr.y (mgr.14505) 742 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:38 vm02 bash[23351]: cluster 2026-03-09T17:45:36.912530+0000 mgr.y (mgr.14505) 742 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:38 vm00 bash[28333]: cluster 2026-03-09T17:45:36.912530+0000 mgr.y (mgr.14505) 742 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:38 vm00 bash[28333]: cluster 2026-03-09T17:45:36.912530+0000 mgr.y (mgr.14505) 742 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:38 vm00 bash[20770]: cluster 2026-03-09T17:45:36.912530+0000 mgr.y (mgr.14505) 742 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:38 vm00 bash[20770]: cluster 2026-03-09T17:45:36.912530+0000 mgr.y (mgr.14505) 742 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:40 vm02 bash[23351]: cluster 2026-03-09T17:45:38.913305+0000 mgr.y (mgr.14505) 743 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:40 vm02 bash[23351]: cluster 2026-03-09T17:45:38.913305+0000 mgr.y (mgr.14505) 743 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 
455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:40 vm00 bash[28333]: cluster 2026-03-09T17:45:38.913305+0000 mgr.y (mgr.14505) 743 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:40 vm00 bash[28333]: cluster 2026-03-09T17:45:38.913305+0000 mgr.y (mgr.14505) 743 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:40 vm00 bash[20770]: cluster 2026-03-09T17:45:38.913305+0000 mgr.y (mgr.14505) 743 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:40 vm00 bash[20770]: cluster 2026-03-09T17:45:38.913305+0000 mgr.y (mgr.14505) 743 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:42.382 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:42 vm02 bash[23351]: cluster 2026-03-09T17:45:40.913624+0000 mgr.y (mgr.14505) 744 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:42.382 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:42 vm02 bash[23351]: cluster 2026-03-09T17:45:40.913624+0000 mgr.y (mgr.14505) 744 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:42 vm00 bash[28333]: cluster 2026-03-09T17:45:40.913624+0000 mgr.y (mgr.14505) 744 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:42 vm00 bash[28333]: cluster 2026-03-09T17:45:40.913624+0000 mgr.y (mgr.14505) 744 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:42 vm00 bash[20770]: cluster 2026-03-09T17:45:40.913624+0000 mgr.y (mgr.14505) 744 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:42 vm00 bash[20770]: cluster 2026-03-09T17:45:40.913624+0000 mgr.y (mgr.14505) 744 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:42.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:45:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:45:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:43 vm02 bash[23351]: audit 2026-03-09T17:45:42.381614+0000 mgr.y (mgr.14505) 745 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:43.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:43 vm02 bash[23351]: audit 2026-03-09T17:45:42.381614+0000 mgr.y (mgr.14505) 745 : audit 
[DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:43 vm00 bash[28333]: audit 2026-03-09T17:45:42.381614+0000 mgr.y (mgr.14505) 745 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:43.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:43 vm00 bash[28333]: audit 2026-03-09T17:45:42.381614+0000 mgr.y (mgr.14505) 745 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:43 vm00 bash[20770]: audit 2026-03-09T17:45:42.381614+0000 mgr.y (mgr.14505) 745 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:43.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:43 vm00 bash[20770]: audit 2026-03-09T17:45:42.381614+0000 mgr.y (mgr.14505) 745 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:44.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:44 vm02 bash[23351]: cluster 2026-03-09T17:45:42.914251+0000 mgr.y (mgr.14505) 746 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:44 vm02 bash[23351]: cluster 2026-03-09T17:45:42.914251+0000 mgr.y (mgr.14505) 746 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:44 vm02 bash[23351]: audit 2026-03-09T17:45:43.563120+0000 mon.c (mon.2) 835 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:44.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:44 vm02 bash[23351]: audit 2026-03-09T17:45:43.563120+0000 mon.c (mon.2) 835 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:44 vm00 bash[28333]: cluster 2026-03-09T17:45:42.914251+0000 mgr.y (mgr.14505) 746 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:44 vm00 bash[28333]: cluster 2026-03-09T17:45:42.914251+0000 mgr.y (mgr.14505) 746 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:44 vm00 bash[28333]: audit 2026-03-09T17:45:43.563120+0000 mon.c (mon.2) 835 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:44 vm00 bash[28333]: audit 2026-03-09T17:45:43.563120+0000 mon.c (mon.2) 835 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:44 vm00 bash[20770]: cluster 2026-03-09T17:45:42.914251+0000 mgr.y (mgr.14505) 746 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:44 vm00 bash[20770]: cluster 2026-03-09T17:45:42.914251+0000 mgr.y (mgr.14505) 746 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:44 vm00 bash[20770]: audit 2026-03-09T17:45:43.563120+0000 mon.c (mon.2) 835 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:44 vm00 bash[20770]: audit 2026-03-09T17:45:43.563120+0000 mon.c (mon.2) 835 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:46.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:46 vm02 bash[23351]: cluster 2026-03-09T17:45:44.914639+0000 mgr.y (mgr.14505) 747 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:46.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:46 vm02 bash[23351]: cluster 2026-03-09T17:45:44.914639+0000 mgr.y (mgr.14505) 747 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:46 vm00 bash[28333]: cluster 2026-03-09T17:45:44.914639+0000 mgr.y (mgr.14505) 747 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:46 vm00 bash[28333]: cluster 2026-03-09T17:45:44.914639+0000 mgr.y (mgr.14505) 747 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:46 vm00 bash[20770]: cluster 2026-03-09T17:45:44.914639+0000 mgr.y (mgr.14505) 747 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:46 vm00 bash[20770]: cluster 2026-03-09T17:45:44.914639+0000 mgr.y (mgr.14505) 747 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:45:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:45:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:45:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:48 vm02 bash[23351]: cluster 2026-03-09T17:45:46.914980+0000 mgr.y (mgr.14505) 748 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:48 vm02 bash[23351]: cluster 
2026-03-09T17:45:46.914980+0000 mgr.y (mgr.14505) 748 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:48 vm00 bash[28333]: cluster 2026-03-09T17:45:46.914980+0000 mgr.y (mgr.14505) 748 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:48 vm00 bash[28333]: cluster 2026-03-09T17:45:46.914980+0000 mgr.y (mgr.14505) 748 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:48 vm00 bash[20770]: cluster 2026-03-09T17:45:46.914980+0000 mgr.y (mgr.14505) 748 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:48 vm00 bash[20770]: cluster 2026-03-09T17:45:46.914980+0000 mgr.y (mgr.14505) 748 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:50 vm02 bash[23351]: cluster 2026-03-09T17:45:48.915651+0000 mgr.y (mgr.14505) 749 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:50.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:50 vm02 bash[23351]: cluster 2026-03-09T17:45:48.915651+0000 mgr.y (mgr.14505) 749 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:50 vm00 bash[28333]: cluster 2026-03-09T17:45:48.915651+0000 mgr.y (mgr.14505) 749 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:50 vm00 bash[28333]: cluster 2026-03-09T17:45:48.915651+0000 mgr.y (mgr.14505) 749 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:50 vm00 bash[20770]: cluster 2026-03-09T17:45:48.915651+0000 mgr.y (mgr.14505) 749 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:50 vm00 bash[20770]: cluster 2026-03-09T17:45:48.915651+0000 mgr.y (mgr.14505) 749 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:52.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:52 vm02 bash[23351]: cluster 2026-03-09T17:45:50.915982+0000 mgr.y (mgr.14505) 750 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:52.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:52 vm02 bash[23351]: cluster 2026-03-09T17:45:50.915982+0000 mgr.y (mgr.14505) 750 : cluster [DBG] pgmap v1243: 228 pgs: 228 
active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:52 vm00 bash[28333]: cluster 2026-03-09T17:45:50.915982+0000 mgr.y (mgr.14505) 750 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:52 vm00 bash[28333]: cluster 2026-03-09T17:45:50.915982+0000 mgr.y (mgr.14505) 750 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:52 vm00 bash[20770]: cluster 2026-03-09T17:45:50.915982+0000 mgr.y (mgr.14505) 750 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:52 vm00 bash[20770]: cluster 2026-03-09T17:45:50.915982+0000 mgr.y (mgr.14505) 750 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:52.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:45:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:45:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:53 vm00 bash[28333]: audit 2026-03-09T17:45:52.389494+0000 mgr.y (mgr.14505) 751 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:53 vm00 bash[28333]: audit 2026-03-09T17:45:52.389494+0000 mgr.y (mgr.14505) 751 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:53 vm00 bash[20770]: audit 2026-03-09T17:45:52.389494+0000 mgr.y (mgr.14505) 751 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:53 vm00 bash[20770]: audit 2026-03-09T17:45:52.389494+0000 mgr.y (mgr.14505) 751 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:53.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:53 vm02 bash[23351]: audit 2026-03-09T17:45:52.389494+0000 mgr.y (mgr.14505) 751 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:53.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:53 vm02 bash[23351]: audit 2026-03-09T17:45:52.389494+0000 mgr.y (mgr.14505) 751 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:45:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:54 vm00 bash[28333]: cluster 2026-03-09T17:45:52.916469+0000 mgr.y (mgr.14505) 752 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:54 vm00 bash[28333]: cluster 2026-03-09T17:45:52.916469+0000 mgr.y (mgr.14505) 752 : 
cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:54 vm00 bash[20770]: cluster 2026-03-09T17:45:52.916469+0000 mgr.y (mgr.14505) 752 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:54 vm00 bash[20770]: cluster 2026-03-09T17:45:52.916469+0000 mgr.y (mgr.14505) 752 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:54 vm02 bash[23351]: cluster 2026-03-09T17:45:52.916469+0000 mgr.y (mgr.14505) 752 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:54 vm02 bash[23351]: cluster 2026-03-09T17:45:52.916469+0000 mgr.y (mgr.14505) 752 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:45:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:56 vm00 bash[28333]: cluster 2026-03-09T17:45:54.916809+0000 mgr.y (mgr.14505) 753 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:56 vm00 bash[28333]: cluster 2026-03-09T17:45:54.916809+0000 mgr.y (mgr.14505) 753 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:56 vm00 bash[20770]: cluster 2026-03-09T17:45:54.916809+0000 mgr.y (mgr.14505) 753 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:56 vm00 bash[20770]: cluster 2026-03-09T17:45:54.916809+0000 mgr.y (mgr.14505) 753 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:45:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:45:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:45:56.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:56 vm02 bash[23351]: cluster 2026-03-09T17:45:54.916809+0000 mgr.y (mgr.14505) 753 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:56.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:56 vm02 bash[23351]: cluster 2026-03-09T17:45:54.916809+0000 mgr.y (mgr.14505) 753 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:58 vm00 bash[28333]: cluster 2026-03-09T17:45:56.917058+0000 mgr.y (mgr.14505) 754 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:58.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:58 vm00 bash[28333]: cluster 2026-03-09T17:45:56.917058+0000 mgr.y (mgr.14505) 754 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:58 vm00 bash[20770]: cluster 2026-03-09T17:45:56.917058+0000 mgr.y (mgr.14505) 754 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:58 vm00 bash[20770]: cluster 2026-03-09T17:45:56.917058+0000 mgr.y (mgr.14505) 754 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:58.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:58 vm02 bash[23351]: cluster 2026-03-09T17:45:56.917058+0000 mgr.y (mgr.14505) 754 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:58.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:58 vm02 bash[23351]: cluster 2026-03-09T17:45:56.917058+0000 mgr.y (mgr.14505) 754 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:59 vm00 bash[28333]: audit 2026-03-09T17:45:58.568846+0000 mon.c (mon.2) 836 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:59 vm00 bash[28333]: audit 2026-03-09T17:45:58.568846+0000 mon.c (mon.2) 836 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:59 vm00 bash[28333]: audit 2026-03-09T17:45:59.002438+0000 mon.c (mon.2) 837 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:45:59 vm00 bash[28333]: audit 2026-03-09T17:45:59.002438+0000 mon.c (mon.2) 837 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:59 vm00 bash[20770]: audit 2026-03-09T17:45:58.568846+0000 mon.c (mon.2) 836 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:59 vm00 bash[20770]: audit 2026-03-09T17:45:58.568846+0000 mon.c (mon.2) 836 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:59 vm00 bash[20770]: audit 2026-03-09T17:45:59.002438+0000 mon.c (mon.2) 837 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:45:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:45:59 vm00 bash[20770]: audit 
2026-03-09T17:45:59.002438+0000 mon.c (mon.2) 837 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:45:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:59 vm02 bash[23351]: audit 2026-03-09T17:45:58.568846+0000 mon.c (mon.2) 836 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:59 vm02 bash[23351]: audit 2026-03-09T17:45:58.568846+0000 mon.c (mon.2) 836 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:45:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:59 vm02 bash[23351]: audit 2026-03-09T17:45:59.002438+0000 mon.c (mon.2) 837 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:45:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:45:59 vm02 bash[23351]: audit 2026-03-09T17:45:59.002438+0000 mon.c (mon.2) 837 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: cluster 2026-03-09T17:45:58.917686+0000 mgr.y (mgr.14505) 755 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: cluster 2026-03-09T17:45:58.917686+0000 mgr.y (mgr.14505) 755 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: audit 2026-03-09T17:45:59.324567+0000 mon.c (mon.2) 838 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: audit 2026-03-09T17:45:59.324567+0000 mon.c (mon.2) 838 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: audit 2026-03-09T17:45:59.325622+0000 mon.c (mon.2) 839 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: audit 2026-03-09T17:45:59.325622+0000 mon.c (mon.2) 839 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: audit 2026-03-09T17:45:59.333036+0000 mon.a (mon.0) 3522 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:00 vm00 bash[28333]: audit 2026-03-09T17:45:59.333036+0000 mon.a (mon.0) 3522 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 17:46:00 vm00 bash[20770]: cluster 2026-03-09T17:45:58.917686+0000 mgr.y (mgr.14505) 755 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: cluster 2026-03-09T17:45:58.917686+0000 mgr.y (mgr.14505) 755 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: audit 2026-03-09T17:45:59.324567+0000 mon.c (mon.2) 838 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: audit 2026-03-09T17:45:59.324567+0000 mon.c (mon.2) 838 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: audit 2026-03-09T17:45:59.325622+0000 mon.c (mon.2) 839 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: audit 2026-03-09T17:45:59.325622+0000 mon.c (mon.2) 839 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: audit 2026-03-09T17:45:59.333036+0000 mon.a (mon.0) 3522 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:46:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:00 vm00 bash[20770]: audit 2026-03-09T17:45:59.333036+0000 mon.a (mon.0) 3522 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:46:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: cluster 2026-03-09T17:45:58.917686+0000 mgr.y (mgr.14505) 755 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: cluster 2026-03-09T17:45:58.917686+0000 mgr.y (mgr.14505) 755 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: audit 2026-03-09T17:45:59.324567+0000 mon.c (mon.2) 838 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:46:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: audit 2026-03-09T17:45:59.324567+0000 mon.c (mon.2) 838 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:46:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: audit 2026-03-09T17:45:59.325622+0000 mon.c (mon.2) 839 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:46:00.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: audit 2026-03-09T17:45:59.325622+0000 mon.c (mon.2) 839 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:46:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: audit 2026-03-09T17:45:59.333036+0000 mon.a (mon.0) 3522 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:46:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:00 vm02 bash[23351]: audit 2026-03-09T17:45:59.333036+0000 mon.a (mon.0) 3522 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:46:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:02 vm00 bash[28333]: cluster 2026-03-09T17:46:00.917969+0000 mgr.y (mgr.14505) 756 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:02 vm00 bash[28333]: cluster 2026-03-09T17:46:00.917969+0000 mgr.y (mgr.14505) 756 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:02 vm00 bash[20770]: cluster 2026-03-09T17:46:00.917969+0000 mgr.y (mgr.14505) 756 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:02 vm00 bash[20770]: cluster 2026-03-09T17:46:00.917969+0000 mgr.y (mgr.14505) 756 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:02 vm02 bash[23351]: cluster 2026-03-09T17:46:00.917969+0000 mgr.y (mgr.14505) 756 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:02 vm02 bash[23351]: cluster 2026-03-09T17:46:00.917969+0000 mgr.y (mgr.14505) 756 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:02.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:46:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:46:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:03 vm00 bash[28333]: audit 2026-03-09T17:46:02.397319+0000 mgr.y (mgr.14505) 757 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:03 vm00 bash[28333]: audit 2026-03-09T17:46:02.397319+0000 mgr.y (mgr.14505) 757 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:03 vm00 bash[20770]: audit 2026-03-09T17:46:02.397319+0000 mgr.y (mgr.14505) 757 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:03 vm00 bash[20770]: audit 2026-03-09T17:46:02.397319+0000 mgr.y (mgr.14505) 
757 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:03 vm02 bash[23351]: audit 2026-03-09T17:46:02.397319+0000 mgr.y (mgr.14505) 757 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:03 vm02 bash[23351]: audit 2026-03-09T17:46:02.397319+0000 mgr.y (mgr.14505) 757 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:04 vm00 bash[28333]: cluster 2026-03-09T17:46:02.918593+0000 mgr.y (mgr.14505) 758 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:04 vm00 bash[28333]: cluster 2026-03-09T17:46:02.918593+0000 mgr.y (mgr.14505) 758 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:04 vm00 bash[20770]: cluster 2026-03-09T17:46:02.918593+0000 mgr.y (mgr.14505) 758 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:04 vm00 bash[20770]: cluster 2026-03-09T17:46:02.918593+0000 mgr.y (mgr.14505) 758 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:04 vm02 bash[23351]: cluster 2026-03-09T17:46:02.918593+0000 mgr.y (mgr.14505) 758 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:04 vm02 bash[23351]: cluster 2026-03-09T17:46:02.918593+0000 mgr.y (mgr.14505) 758 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:06 vm00 bash[28333]: cluster 2026-03-09T17:46:04.919064+0000 mgr.y (mgr.14505) 759 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:06 vm00 bash[28333]: cluster 2026-03-09T17:46:04.919064+0000 mgr.y (mgr.14505) 759 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:06 vm00 bash[20770]: cluster 2026-03-09T17:46:04.919064+0000 mgr.y (mgr.14505) 759 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:06 vm00 bash[20770]: cluster 2026-03-09T17:46:04.919064+0000 mgr.y (mgr.14505) 759 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:46:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:46:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:46:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:06 vm02 bash[23351]: cluster 2026-03-09T17:46:04.919064+0000 mgr.y (mgr.14505) 759 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:06 vm02 bash[23351]: cluster 2026-03-09T17:46:04.919064+0000 mgr.y (mgr.14505) 759 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:08 vm00 bash[28333]: cluster 2026-03-09T17:46:06.919383+0000 mgr.y (mgr.14505) 760 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:08 vm00 bash[28333]: cluster 2026-03-09T17:46:06.919383+0000 mgr.y (mgr.14505) 760 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:08 vm00 bash[20770]: cluster 2026-03-09T17:46:06.919383+0000 mgr.y (mgr.14505) 760 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:08 vm00 bash[20770]: cluster 2026-03-09T17:46:06.919383+0000 mgr.y (mgr.14505) 760 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:08 vm02 bash[23351]: cluster 2026-03-09T17:46:06.919383+0000 mgr.y (mgr.14505) 760 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:08 vm02 bash[23351]: cluster 2026-03-09T17:46:06.919383+0000 mgr.y (mgr.14505) 760 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:10 vm00 bash[28333]: cluster 2026-03-09T17:46:08.920084+0000 mgr.y (mgr.14505) 761 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:10 vm00 bash[28333]: cluster 2026-03-09T17:46:08.920084+0000 mgr.y (mgr.14505) 761 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:10 vm00 bash[20770]: cluster 2026-03-09T17:46:08.920084+0000 mgr.y (mgr.14505) 761 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:10 vm00 bash[20770]: cluster 2026-03-09T17:46:08.920084+0000 mgr.y (mgr.14505) 
761 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:10 vm02 bash[23351]: cluster 2026-03-09T17:46:08.920084+0000 mgr.y (mgr.14505) 761 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:10 vm02 bash[23351]: cluster 2026-03-09T17:46:08.920084+0000 mgr.y (mgr.14505) 761 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:12 vm00 bash[28333]: cluster 2026-03-09T17:46:10.920454+0000 mgr.y (mgr.14505) 762 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:12 vm00 bash[28333]: cluster 2026-03-09T17:46:10.920454+0000 mgr.y (mgr.14505) 762 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:12 vm00 bash[20770]: cluster 2026-03-09T17:46:10.920454+0000 mgr.y (mgr.14505) 762 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:12 vm00 bash[20770]: cluster 2026-03-09T17:46:10.920454+0000 mgr.y (mgr.14505) 762 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:12 vm02 bash[23351]: cluster 2026-03-09T17:46:10.920454+0000 mgr.y (mgr.14505) 762 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:12 vm02 bash[23351]: cluster 2026-03-09T17:46:10.920454+0000 mgr.y (mgr.14505) 762 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:12.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:46:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:46:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:13 vm00 bash[28333]: audit 2026-03-09T17:46:12.399066+0000 mgr.y (mgr.14505) 763 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:13 vm00 bash[28333]: audit 2026-03-09T17:46:12.399066+0000 mgr.y (mgr.14505) 763 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:13 vm00 bash[20770]: audit 2026-03-09T17:46:12.399066+0000 mgr.y (mgr.14505) 763 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:13 vm00 bash[20770]: audit 
2026-03-09T17:46:12.399066+0000 mgr.y (mgr.14505) 763 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:13.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:13 vm02 bash[23351]: audit 2026-03-09T17:46:12.399066+0000 mgr.y (mgr.14505) 763 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:13.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:13 vm02 bash[23351]: audit 2026-03-09T17:46:12.399066+0000 mgr.y (mgr.14505) 763 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:14 vm00 bash[28333]: cluster 2026-03-09T17:46:12.921062+0000 mgr.y (mgr.14505) 764 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:14 vm00 bash[28333]: cluster 2026-03-09T17:46:12.921062+0000 mgr.y (mgr.14505) 764 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:14 vm00 bash[28333]: audit 2026-03-09T17:46:13.575565+0000 mon.c (mon.2) 840 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:14 vm00 bash[28333]: audit 2026-03-09T17:46:13.575565+0000 mon.c (mon.2) 840 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:14 vm00 bash[20770]: cluster 2026-03-09T17:46:12.921062+0000 mgr.y (mgr.14505) 764 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:14 vm00 bash[20770]: cluster 2026-03-09T17:46:12.921062+0000 mgr.y (mgr.14505) 764 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:14 vm00 bash[20770]: audit 2026-03-09T17:46:13.575565+0000 mon.c (mon.2) 840 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:14 vm00 bash[20770]: audit 2026-03-09T17:46:13.575565+0000 mon.c (mon.2) 840 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:14 vm02 bash[23351]: cluster 2026-03-09T17:46:12.921062+0000 mgr.y (mgr.14505) 764 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:14 vm02 bash[23351]: cluster 2026-03-09T17:46:12.921062+0000 mgr.y (mgr.14505) 764 : cluster [DBG] pgmap 
v1254: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:14 vm02 bash[23351]: audit 2026-03-09T17:46:13.575565+0000 mon.c (mon.2) 840 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:14 vm02 bash[23351]: audit 2026-03-09T17:46:13.575565+0000 mon.c (mon.2) 840 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:16 vm00 bash[28333]: cluster 2026-03-09T17:46:14.921384+0000 mgr.y (mgr.14505) 765 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:16 vm00 bash[28333]: cluster 2026-03-09T17:46:14.921384+0000 mgr.y (mgr.14505) 765 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:16 vm00 bash[20770]: cluster 2026-03-09T17:46:14.921384+0000 mgr.y (mgr.14505) 765 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:16 vm00 bash[20770]: cluster 2026-03-09T17:46:14.921384+0000 mgr.y (mgr.14505) 765 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:46:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:46:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:46:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:16 vm02 bash[23351]: cluster 2026-03-09T17:46:14.921384+0000 mgr.y (mgr.14505) 765 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:16 vm02 bash[23351]: cluster 2026-03-09T17:46:14.921384+0000 mgr.y (mgr.14505) 765 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:18 vm00 bash[28333]: cluster 2026-03-09T17:46:16.921763+0000 mgr.y (mgr.14505) 766 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:18 vm00 bash[28333]: cluster 2026-03-09T17:46:16.921763+0000 mgr.y (mgr.14505) 766 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:18 vm00 bash[20770]: cluster 2026-03-09T17:46:16.921763+0000 mgr.y (mgr.14505) 766 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:46:18 vm00 bash[20770]: cluster 2026-03-09T17:46:16.921763+0000 mgr.y (mgr.14505) 766 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:18 vm02 bash[23351]: cluster 2026-03-09T17:46:16.921763+0000 mgr.y (mgr.14505) 766 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:18 vm02 bash[23351]: cluster 2026-03-09T17:46:16.921763+0000 mgr.y (mgr.14505) 766 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:20 vm00 bash[28333]: cluster 2026-03-09T17:46:18.922522+0000 mgr.y (mgr.14505) 767 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:20 vm00 bash[28333]: cluster 2026-03-09T17:46:18.922522+0000 mgr.y (mgr.14505) 767 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:20 vm00 bash[20770]: cluster 2026-03-09T17:46:18.922522+0000 mgr.y (mgr.14505) 767 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:20 vm00 bash[20770]: cluster 2026-03-09T17:46:18.922522+0000 mgr.y (mgr.14505) 767 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:20 vm02 bash[23351]: cluster 2026-03-09T17:46:18.922522+0000 mgr.y (mgr.14505) 767 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:20 vm02 bash[23351]: cluster 2026-03-09T17:46:18.922522+0000 mgr.y (mgr.14505) 767 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:22 vm02 bash[23351]: cluster 2026-03-09T17:46:20.922822+0000 mgr.y (mgr.14505) 768 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:22 vm02 bash[23351]: cluster 2026-03-09T17:46:20.922822+0000 mgr.y (mgr.14505) 768 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:22.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:46:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:46:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:22 vm00 bash[28333]: cluster 2026-03-09T17:46:20.922822+0000 mgr.y (mgr.14505) 768 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T17:46:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:22 vm00 bash[28333]: cluster 2026-03-09T17:46:20.922822+0000 mgr.y (mgr.14505) 768 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:22 vm00 bash[20770]: cluster 2026-03-09T17:46:20.922822+0000 mgr.y (mgr.14505) 768 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:22 vm00 bash[20770]: cluster 2026-03-09T17:46:20.922822+0000 mgr.y (mgr.14505) 768 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:23.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:23 vm02 bash[23351]: audit 2026-03-09T17:46:22.402603+0000 mgr.y (mgr.14505) 769 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:23 vm02 bash[23351]: audit 2026-03-09T17:46:22.402603+0000 mgr.y (mgr.14505) 769 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:23 vm00 bash[28333]: audit 2026-03-09T17:46:22.402603+0000 mgr.y (mgr.14505) 769 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:23 vm00 bash[28333]: audit 2026-03-09T17:46:22.402603+0000 mgr.y (mgr.14505) 769 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:23 vm00 bash[20770]: audit 2026-03-09T17:46:22.402603+0000 mgr.y (mgr.14505) 769 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:23 vm00 bash[20770]: audit 2026-03-09T17:46:22.402603+0000 mgr.y (mgr.14505) 769 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:24 vm02 bash[23351]: cluster 2026-03-09T17:46:22.923440+0000 mgr.y (mgr.14505) 770 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:24 vm02 bash[23351]: cluster 2026-03-09T17:46:22.923440+0000 mgr.y (mgr.14505) 770 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:24 vm00 bash[28333]: cluster 2026-03-09T17:46:22.923440+0000 mgr.y (mgr.14505) 770 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:24 vm00 bash[28333]: cluster 
2026-03-09T17:46:22.923440+0000 mgr.y (mgr.14505) 770 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:24 vm00 bash[20770]: cluster 2026-03-09T17:46:22.923440+0000 mgr.y (mgr.14505) 770 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:24 vm00 bash[20770]: cluster 2026-03-09T17:46:22.923440+0000 mgr.y (mgr.14505) 770 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:26 vm02 bash[23351]: cluster 2026-03-09T17:46:24.923815+0000 mgr.y (mgr.14505) 771 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:26 vm02 bash[23351]: cluster 2026-03-09T17:46:24.923815+0000 mgr.y (mgr.14505) 771 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:26 vm00 bash[28333]: cluster 2026-03-09T17:46:24.923815+0000 mgr.y (mgr.14505) 771 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:26 vm00 bash[28333]: cluster 2026-03-09T17:46:24.923815+0000 mgr.y (mgr.14505) 771 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:26 vm00 bash[20770]: cluster 2026-03-09T17:46:24.923815+0000 mgr.y (mgr.14505) 771 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:26 vm00 bash[20770]: cluster 2026-03-09T17:46:24.923815+0000 mgr.y (mgr.14505) 771 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:46:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:46:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:46:28.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:27 vm02 bash[23351]: cluster 2026-03-09T17:46:26.924165+0000 mgr.y (mgr.14505) 772 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:28.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:27 vm02 bash[23351]: cluster 2026-03-09T17:46:26.924165+0000 mgr.y (mgr.14505) 772 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:27 vm00 bash[28333]: cluster 2026-03-09T17:46:26.924165+0000 mgr.y (mgr.14505) 772 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T17:46:28.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:27 vm00 bash[28333]: cluster 2026-03-09T17:46:26.924165+0000 mgr.y (mgr.14505) 772 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:27 vm00 bash[20770]: cluster 2026-03-09T17:46:26.924165+0000 mgr.y (mgr.14505) 772 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:28.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:27 vm00 bash[20770]: cluster 2026-03-09T17:46:26.924165+0000 mgr.y (mgr.14505) 772 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:29.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:28 vm02 bash[23351]: audit 2026-03-09T17:46:28.582512+0000 mon.c (mon.2) 841 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:29.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:28 vm02 bash[23351]: audit 2026-03-09T17:46:28.582512+0000 mon.c (mon.2) 841 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:28 vm00 bash[28333]: audit 2026-03-09T17:46:28.582512+0000 mon.c (mon.2) 841 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:29.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:28 vm00 bash[28333]: audit 2026-03-09T17:46:28.582512+0000 mon.c (mon.2) 841 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:28 vm00 bash[20770]: audit 2026-03-09T17:46:28.582512+0000 mon.c (mon.2) 841 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:29.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:28 vm00 bash[20770]: audit 2026-03-09T17:46:28.582512+0000 mon.c (mon.2) 841 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:30.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:29 vm02 bash[23351]: cluster 2026-03-09T17:46:28.924756+0000 mgr.y (mgr.14505) 773 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:30.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:29 vm02 bash[23351]: cluster 2026-03-09T17:46:28.924756+0000 mgr.y (mgr.14505) 773 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:29 vm00 bash[28333]: cluster 2026-03-09T17:46:28.924756+0000 mgr.y (mgr.14505) 773 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 17:46:29 vm00 bash[28333]: cluster 2026-03-09T17:46:28.924756+0000 mgr.y (mgr.14505) 773 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:29 vm00 bash[20770]: cluster 2026-03-09T17:46:28.924756+0000 mgr.y (mgr.14505) 773 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:29 vm00 bash[20770]: cluster 2026-03-09T17:46:28.924756+0000 mgr.y (mgr.14505) 773 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:31 vm00 bash[28333]: cluster 2026-03-09T17:46:30.925005+0000 mgr.y (mgr.14505) 774 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:32.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:31 vm00 bash[28333]: cluster 2026-03-09T17:46:30.925005+0000 mgr.y (mgr.14505) 774 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:31 vm00 bash[20770]: cluster 2026-03-09T17:46:30.925005+0000 mgr.y (mgr.14505) 774 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:32.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:31 vm00 bash[20770]: cluster 2026-03-09T17:46:30.925005+0000 mgr.y (mgr.14505) 774 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:32.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:31 vm02 bash[23351]: cluster 2026-03-09T17:46:30.925005+0000 mgr.y (mgr.14505) 774 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:32.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:31 vm02 bash[23351]: cluster 2026-03-09T17:46:30.925005+0000 mgr.y (mgr.14505) 774 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:32.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:46:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:46:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:32 vm00 bash[28333]: audit 2026-03-09T17:46:32.410890+0000 mgr.y (mgr.14505) 775 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:33.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:32 vm00 bash[28333]: audit 2026-03-09T17:46:32.410890+0000 mgr.y (mgr.14505) 775 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:33.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:32 vm00 bash[20770]: audit 2026-03-09T17:46:32.410890+0000 mgr.y (mgr.14505) 775 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:33.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:32 vm00 bash[20770]: audit 2026-03-09T17:46:32.410890+0000 mgr.y (mgr.14505) 775 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:33.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:32 vm02 bash[23351]: audit 2026-03-09T17:46:32.410890+0000 mgr.y (mgr.14505) 775 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:33.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:32 vm02 bash[23351]: audit 2026-03-09T17:46:32.410890+0000 mgr.y (mgr.14505) 775 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:33 vm00 bash[28333]: cluster 2026-03-09T17:46:32.925773+0000 mgr.y (mgr.14505) 776 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:34.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:33 vm00 bash[28333]: cluster 2026-03-09T17:46:32.925773+0000 mgr.y (mgr.14505) 776 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:33 vm00 bash[20770]: cluster 2026-03-09T17:46:32.925773+0000 mgr.y (mgr.14505) 776 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:34.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:33 vm00 bash[20770]: cluster 2026-03-09T17:46:32.925773+0000 mgr.y (mgr.14505) 776 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:34.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:33 vm02 bash[23351]: cluster 2026-03-09T17:46:32.925773+0000 mgr.y (mgr.14505) 776 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:33 vm02 bash[23351]: cluster 2026-03-09T17:46:32.925773+0000 mgr.y (mgr.14505) 776 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:36.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:36 vm00 bash[28333]: cluster 2026-03-09T17:46:34.926113+0000 mgr.y (mgr.14505) 777 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:36.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:36 vm00 bash[28333]: cluster 2026-03-09T17:46:34.926113+0000 mgr.y (mgr.14505) 777 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:36.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:36 vm00 bash[20770]: cluster 2026-03-09T17:46:34.926113+0000 mgr.y (mgr.14505) 777 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:36.302 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:36 vm00 bash[20770]: cluster 
2026-03-09T17:46:34.926113+0000 mgr.y (mgr.14505) 777 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:36.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:36 vm02 bash[23351]: cluster 2026-03-09T17:46:34.926113+0000 mgr.y (mgr.14505) 777 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:36.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:36 vm02 bash[23351]: cluster 2026-03-09T17:46:34.926113+0000 mgr.y (mgr.14505) 777 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:46:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:46:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:46:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:38 vm00 bash[28333]: cluster 2026-03-09T17:46:36.926401+0000 mgr.y (mgr.14505) 778 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:38 vm00 bash[28333]: cluster 2026-03-09T17:46:36.926401+0000 mgr.y (mgr.14505) 778 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:38 vm00 bash[20770]: cluster 2026-03-09T17:46:36.926401+0000 mgr.y (mgr.14505) 778 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:38 vm00 bash[20770]: cluster 2026-03-09T17:46:36.926401+0000 mgr.y (mgr.14505) 778 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:38 vm02 bash[23351]: cluster 2026-03-09T17:46:36.926401+0000 mgr.y (mgr.14505) 778 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:38 vm02 bash[23351]: cluster 2026-03-09T17:46:36.926401+0000 mgr.y (mgr.14505) 778 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 982 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:40 vm00 bash[28333]: cluster 2026-03-09T17:46:38.927129+0000 mgr.y (mgr.14505) 779 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:40 vm00 bash[28333]: cluster 2026-03-09T17:46:38.927129+0000 mgr.y (mgr.14505) 779 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:40 vm00 bash[20770]: cluster 2026-03-09T17:46:38.927129+0000 mgr.y (mgr.14505) 779 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 
KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:40 vm00 bash[20770]: cluster 2026-03-09T17:46:38.927129+0000 mgr.y (mgr.14505) 779 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:40 vm02 bash[23351]: cluster 2026-03-09T17:46:38.927129+0000 mgr.y (mgr.14505) 779 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:40.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:40 vm02 bash[23351]: cluster 2026-03-09T17:46:38.927129+0000 mgr.y (mgr.14505) 779 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:42 vm00 bash[28333]: cluster 2026-03-09T17:46:40.927470+0000 mgr.y (mgr.14505) 780 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:42 vm00 bash[28333]: cluster 2026-03-09T17:46:40.927470+0000 mgr.y (mgr.14505) 780 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:42 vm00 bash[20770]: cluster 2026-03-09T17:46:40.927470+0000 mgr.y (mgr.14505) 780 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:42 vm00 bash[20770]: cluster 2026-03-09T17:46:40.927470+0000 mgr.y (mgr.14505) 780 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:42 vm02 bash[23351]: cluster 2026-03-09T17:46:40.927470+0000 mgr.y (mgr.14505) 780 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:42 vm02 bash[23351]: cluster 2026-03-09T17:46:40.927470+0000 mgr.y (mgr.14505) 780 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:42.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:46:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:46:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:43 vm00 bash[28333]: audit 2026-03-09T17:46:42.421596+0000 mgr.y (mgr.14505) 781 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:43.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:43 vm00 bash[28333]: audit 2026-03-09T17:46:42.421596+0000 mgr.y (mgr.14505) 781 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:43 vm00 bash[20770]: audit 
2026-03-09T17:46:42.421596+0000 mgr.y (mgr.14505) 781 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:43.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:43 vm00 bash[20770]: audit 2026-03-09T17:46:42.421596+0000 mgr.y (mgr.14505) 781 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:43.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:43 vm02 bash[23351]: audit 2026-03-09T17:46:42.421596+0000 mgr.y (mgr.14505) 781 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:43.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:43 vm02 bash[23351]: audit 2026-03-09T17:46:42.421596+0000 mgr.y (mgr.14505) 781 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:44 vm00 bash[20770]: cluster 2026-03-09T17:46:42.928077+0000 mgr.y (mgr.14505) 782 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:44 vm00 bash[20770]: cluster 2026-03-09T17:46:42.928077+0000 mgr.y (mgr.14505) 782 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:44 vm00 bash[20770]: audit 2026-03-09T17:46:43.588387+0000 mon.c (mon.2) 842 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:44 vm00 bash[20770]: audit 2026-03-09T17:46:43.588387+0000 mon.c (mon.2) 842 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:44 vm00 bash[28333]: cluster 2026-03-09T17:46:42.928077+0000 mgr.y (mgr.14505) 782 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:44 vm00 bash[28333]: cluster 2026-03-09T17:46:42.928077+0000 mgr.y (mgr.14505) 782 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:44 vm00 bash[28333]: audit 2026-03-09T17:46:43.588387+0000 mon.c (mon.2) 842 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:44.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:44 vm00 bash[28333]: audit 2026-03-09T17:46:43.588387+0000 mon.c (mon.2) 842 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:44.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:44 vm02 bash[23351]: cluster 2026-03-09T17:46:42.928077+0000 
mgr.y (mgr.14505) 782 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:44.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:44 vm02 bash[23351]: cluster 2026-03-09T17:46:42.928077+0000 mgr.y (mgr.14505) 782 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:44.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:44 vm02 bash[23351]: audit 2026-03-09T17:46:43.588387+0000 mon.c (mon.2) 842 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:44.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:44 vm02 bash[23351]: audit 2026-03-09T17:46:43.588387+0000 mon.c (mon.2) 842 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:46.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:46 vm02 bash[23351]: cluster 2026-03-09T17:46:44.928440+0000 mgr.y (mgr.14505) 783 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:46.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:46 vm02 bash[23351]: cluster 2026-03-09T17:46:44.928440+0000 mgr.y (mgr.14505) 783 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:46 vm00 bash[28333]: cluster 2026-03-09T17:46:44.928440+0000 mgr.y (mgr.14505) 783 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:46 vm00 bash[28333]: cluster 2026-03-09T17:46:44.928440+0000 mgr.y (mgr.14505) 783 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:46 vm00 bash[20770]: cluster 2026-03-09T17:46:44.928440+0000 mgr.y (mgr.14505) 783 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:46 vm00 bash[20770]: cluster 2026-03-09T17:46:44.928440+0000 mgr.y (mgr.14505) 783 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:46:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:46:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:46:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:48 vm02 bash[23351]: cluster 2026-03-09T17:46:46.928754+0000 mgr.y (mgr.14505) 784 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:48 vm02 bash[23351]: cluster 2026-03-09T17:46:46.928754+0000 mgr.y (mgr.14505) 784 : cluster [DBG] pgmap v1271: 228 pgs: 228 
active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:48 vm00 bash[28333]: cluster 2026-03-09T17:46:46.928754+0000 mgr.y (mgr.14505) 784 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:48 vm00 bash[28333]: cluster 2026-03-09T17:46:46.928754+0000 mgr.y (mgr.14505) 784 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:48 vm00 bash[20770]: cluster 2026-03-09T17:46:46.928754+0000 mgr.y (mgr.14505) 784 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:48 vm00 bash[20770]: cluster 2026-03-09T17:46:46.928754+0000 mgr.y (mgr.14505) 784 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:50 vm02 bash[23351]: cluster 2026-03-09T17:46:48.929374+0000 mgr.y (mgr.14505) 785 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:50 vm02 bash[23351]: cluster 2026-03-09T17:46:48.929374+0000 mgr.y (mgr.14505) 785 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:50 vm00 bash[28333]: cluster 2026-03-09T17:46:48.929374+0000 mgr.y (mgr.14505) 785 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:50 vm00 bash[28333]: cluster 2026-03-09T17:46:48.929374+0000 mgr.y (mgr.14505) 785 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:50 vm00 bash[20770]: cluster 2026-03-09T17:46:48.929374+0000 mgr.y (mgr.14505) 785 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:50 vm00 bash[20770]: cluster 2026-03-09T17:46:48.929374+0000 mgr.y (mgr.14505) 785 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:46:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:52 vm02 bash[23351]: cluster 2026-03-09T17:46:50.929717+0000 mgr.y (mgr.14505) 786 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:52 vm02 bash[23351]: cluster 2026-03-09T17:46:50.929717+0000 mgr.y (mgr.14505) 786 : 
cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:52 vm00 bash[28333]: cluster 2026-03-09T17:46:50.929717+0000 mgr.y (mgr.14505) 786 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:52 vm00 bash[28333]: cluster 2026-03-09T17:46:50.929717+0000 mgr.y (mgr.14505) 786 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:52 vm00 bash[20770]: cluster 2026-03-09T17:46:50.929717+0000 mgr.y (mgr.14505) 786 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:52 vm00 bash[20770]: cluster 2026-03-09T17:46:50.929717+0000 mgr.y (mgr.14505) 786 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:52.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:46:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:46:53.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:53 vm02 bash[23351]: audit 2026-03-09T17:46:52.424011+0000 mgr.y (mgr.14505) 787 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:53.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:53 vm02 bash[23351]: audit 2026-03-09T17:46:52.424011+0000 mgr.y (mgr.14505) 787 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:53 vm00 bash[28333]: audit 2026-03-09T17:46:52.424011+0000 mgr.y (mgr.14505) 787 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:53 vm00 bash[28333]: audit 2026-03-09T17:46:52.424011+0000 mgr.y (mgr.14505) 787 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:53 vm00 bash[20770]: audit 2026-03-09T17:46:52.424011+0000 mgr.y (mgr.14505) 787 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:53 vm00 bash[20770]: audit 2026-03-09T17:46:52.424011+0000 mgr.y (mgr.14505) 787 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:46:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:54 vm02 bash[23351]: cluster 2026-03-09T17:46:52.930340+0000 mgr.y (mgr.14505) 788 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:54 vm02 bash[23351]: cluster 
2026-03-09T17:46:52.930340+0000 mgr.y (mgr.14505) 788 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:54 vm00 bash[28333]: cluster 2026-03-09T17:46:52.930340+0000 mgr.y (mgr.14505) 788 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:54 vm00 bash[28333]: cluster 2026-03-09T17:46:52.930340+0000 mgr.y (mgr.14505) 788 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:54 vm00 bash[20770]: cluster 2026-03-09T17:46:52.930340+0000 mgr.y (mgr.14505) 788 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:54 vm00 bash[20770]: cluster 2026-03-09T17:46:52.930340+0000 mgr.y (mgr.14505) 788 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:46:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:56 vm02 bash[23351]: cluster 2026-03-09T17:46:54.930714+0000 mgr.y (mgr.14505) 789 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:56 vm02 bash[23351]: cluster 2026-03-09T17:46:54.930714+0000 mgr.y (mgr.14505) 789 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:56 vm00 bash[28333]: cluster 2026-03-09T17:46:54.930714+0000 mgr.y (mgr.14505) 789 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:56 vm00 bash[28333]: cluster 2026-03-09T17:46:54.930714+0000 mgr.y (mgr.14505) 789 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:56 vm00 bash[20770]: cluster 2026-03-09T17:46:54.930714+0000 mgr.y (mgr.14505) 789 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:56 vm00 bash[20770]: cluster 2026-03-09T17:46:54.930714+0000 mgr.y (mgr.14505) 789 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:46:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:46:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:46:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:58 vm02 bash[23351]: cluster 2026-03-09T17:46:56.931020+0000 mgr.y (mgr.14505) 790 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T17:46:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:58 vm02 bash[23351]: cluster 2026-03-09T17:46:56.931020+0000 mgr.y (mgr.14505) 790 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:58 vm00 bash[28333]: cluster 2026-03-09T17:46:56.931020+0000 mgr.y (mgr.14505) 790 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:58 vm00 bash[28333]: cluster 2026-03-09T17:46:56.931020+0000 mgr.y (mgr.14505) 790 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:58 vm00 bash[20770]: cluster 2026-03-09T17:46:56.931020+0000 mgr.y (mgr.14505) 790 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:58 vm00 bash[20770]: cluster 2026-03-09T17:46:56.931020+0000 mgr.y (mgr.14505) 790 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:46:59.119 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:59 vm02 bash[23351]: audit 2026-03-09T17:46:58.594546+0000 mon.c (mon.2) 843 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:59.119 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:46:59 vm02 bash[23351]: audit 2026-03-09T17:46:58.594546+0000 mon.c (mon.2) 843 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:59 vm00 bash[28333]: audit 2026-03-09T17:46:58.594546+0000 mon.c (mon.2) 843 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:46:59 vm00 bash[28333]: audit 2026-03-09T17:46:58.594546+0000 mon.c (mon.2) 843 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:59 vm00 bash[20770]: audit 2026-03-09T17:46:58.594546+0000 mon.c (mon.2) 843 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:46:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:46:59 vm00 bash[20770]: audit 2026-03-09T17:46:58.594546+0000 mon.c (mon.2) 843 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: cluster 2026-03-09T17:46:58.931629+0000 mgr.y (mgr.14505) 791 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:00.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: cluster 2026-03-09T17:46:58.931629+0000 mgr.y (mgr.14505) 791 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.373305+0000 mon.c (mon.2) 844 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.373305+0000 mon.c (mon.2) 844 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.678641+0000 mon.c (mon.2) 845 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.678641+0000 mon.c (mon.2) 845 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.679685+0000 mon.c (mon.2) 846 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.679685+0000 mon.c (mon.2) 846 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.686630+0000 mon.a (mon.0) 3523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:47:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:00 vm02 bash[23351]: audit 2026-03-09T17:46:59.686630+0000 mon.a (mon.0) 3523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: cluster 2026-03-09T17:46:58.931629+0000 mgr.y (mgr.14505) 791 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: cluster 2026-03-09T17:46:58.931629+0000 mgr.y (mgr.14505) 791 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.373305+0000 mon.c (mon.2) 844 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.373305+0000 mon.c (mon.2) 844 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.678641+0000 mon.c (mon.2) 845 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.678641+0000 mon.c (mon.2) 845 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.679685+0000 mon.c (mon.2) 846 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.679685+0000 mon.c (mon.2) 846 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.686630+0000 mon.a (mon.0) 3523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:00 vm00 bash[28333]: audit 2026-03-09T17:46:59.686630+0000 mon.a (mon.0) 3523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: cluster 2026-03-09T17:46:58.931629+0000 mgr.y (mgr.14505) 791 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: cluster 2026-03-09T17:46:58.931629+0000 mgr.y (mgr.14505) 791 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.373305+0000 mon.c (mon.2) 844 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.373305+0000 mon.c (mon.2) 844 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.678641+0000 mon.c (mon.2) 845 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.678641+0000 mon.c (mon.2) 845 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.679685+0000 mon.c (mon.2) 846 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.679685+0000 mon.c (mon.2) 846 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.686630+0000 mon.a (mon.0) 3523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:47:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:00 vm00 bash[20770]: audit 2026-03-09T17:46:59.686630+0000 mon.a (mon.0) 3523 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:47:02.429 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:02 vm02 bash[23351]: cluster 2026-03-09T17:47:00.931915+0000 mgr.y (mgr.14505) 792 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:02.429 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:02 vm02 bash[23351]: cluster 2026-03-09T17:47:00.931915+0000 mgr.y (mgr.14505) 792 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:02 vm00 bash[28333]: cluster 2026-03-09T17:47:00.931915+0000 mgr.y (mgr.14505) 792 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:02 vm00 bash[28333]: cluster 2026-03-09T17:47:00.931915+0000 mgr.y (mgr.14505) 792 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:02 vm00 bash[20770]: cluster 2026-03-09T17:47:00.931915+0000 mgr.y (mgr.14505) 792 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:02 vm00 bash[20770]: cluster 2026-03-09T17:47:00.931915+0000 mgr.y (mgr.14505) 792 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:02.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:47:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:47:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:03 vm00 bash[28333]: audit 2026-03-09T17:47:02.429390+0000 mgr.y (mgr.14505) 793 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:03 vm00 bash[28333]: audit 2026-03-09T17:47:02.429390+0000 mgr.y (mgr.14505) 793 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:03 vm00 bash[20770]: audit 2026-03-09T17:47:02.429390+0000 mgr.y (mgr.14505) 793 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:03 
vm00 bash[20770]: audit 2026-03-09T17:47:02.429390+0000 mgr.y (mgr.14505) 793 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:03 vm02 bash[23351]: audit 2026-03-09T17:47:02.429390+0000 mgr.y (mgr.14505) 793 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:03.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:03 vm02 bash[23351]: audit 2026-03-09T17:47:02.429390+0000 mgr.y (mgr.14505) 793 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:04 vm00 bash[28333]: cluster 2026-03-09T17:47:02.932523+0000 mgr.y (mgr.14505) 794 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:04 vm00 bash[28333]: cluster 2026-03-09T17:47:02.932523+0000 mgr.y (mgr.14505) 794 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:04 vm00 bash[20770]: cluster 2026-03-09T17:47:02.932523+0000 mgr.y (mgr.14505) 794 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:04 vm00 bash[20770]: cluster 2026-03-09T17:47:02.932523+0000 mgr.y (mgr.14505) 794 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:04 vm02 bash[23351]: cluster 2026-03-09T17:47:02.932523+0000 mgr.y (mgr.14505) 794 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:04 vm02 bash[23351]: cluster 2026-03-09T17:47:02.932523+0000 mgr.y (mgr.14505) 794 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:06 vm00 bash[28333]: cluster 2026-03-09T17:47:04.932874+0000 mgr.y (mgr.14505) 795 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:06 vm00 bash[28333]: cluster 2026-03-09T17:47:04.932874+0000 mgr.y (mgr.14505) 795 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:06 vm00 bash[20770]: cluster 2026-03-09T17:47:04.932874+0000 mgr.y (mgr.14505) 795 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:06 vm00 bash[20770]: cluster 2026-03-09T17:47:04.932874+0000 mgr.y (mgr.14505) 795 : cluster 
[DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:47:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:47:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:47:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:06 vm02 bash[23351]: cluster 2026-03-09T17:47:04.932874+0000 mgr.y (mgr.14505) 795 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:06 vm02 bash[23351]: cluster 2026-03-09T17:47:04.932874+0000 mgr.y (mgr.14505) 795 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:08 vm00 bash[28333]: cluster 2026-03-09T17:47:06.933152+0000 mgr.y (mgr.14505) 796 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:08 vm00 bash[28333]: cluster 2026-03-09T17:47:06.933152+0000 mgr.y (mgr.14505) 796 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:08 vm00 bash[20770]: cluster 2026-03-09T17:47:06.933152+0000 mgr.y (mgr.14505) 796 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:08 vm00 bash[20770]: cluster 2026-03-09T17:47:06.933152+0000 mgr.y (mgr.14505) 796 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:08 vm02 bash[23351]: cluster 2026-03-09T17:47:06.933152+0000 mgr.y (mgr.14505) 796 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:08 vm02 bash[23351]: cluster 2026-03-09T17:47:06.933152+0000 mgr.y (mgr.14505) 796 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:10 vm00 bash[28333]: cluster 2026-03-09T17:47:08.933920+0000 mgr.y (mgr.14505) 797 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:10 vm00 bash[28333]: cluster 2026-03-09T17:47:08.933920+0000 mgr.y (mgr.14505) 797 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:10 vm00 bash[20770]: cluster 2026-03-09T17:47:08.933920+0000 mgr.y (mgr.14505) 797 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:10.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:10 vm00 bash[20770]: cluster 2026-03-09T17:47:08.933920+0000 mgr.y (mgr.14505) 797 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:10 vm02 bash[23351]: cluster 2026-03-09T17:47:08.933920+0000 mgr.y (mgr.14505) 797 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:10 vm02 bash[23351]: cluster 2026-03-09T17:47:08.933920+0000 mgr.y (mgr.14505) 797 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:12 vm00 bash[28333]: cluster 2026-03-09T17:47:10.934214+0000 mgr.y (mgr.14505) 798 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:12 vm00 bash[28333]: cluster 2026-03-09T17:47:10.934214+0000 mgr.y (mgr.14505) 798 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:12 vm00 bash[20770]: cluster 2026-03-09T17:47:10.934214+0000 mgr.y (mgr.14505) 798 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:12 vm00 bash[20770]: cluster 2026-03-09T17:47:10.934214+0000 mgr.y (mgr.14505) 798 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:12 vm02 bash[23351]: cluster 2026-03-09T17:47:10.934214+0000 mgr.y (mgr.14505) 798 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:12 vm02 bash[23351]: cluster 2026-03-09T17:47:10.934214+0000 mgr.y (mgr.14505) 798 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:12.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:47:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:47:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:13 vm00 bash[28333]: audit 2026-03-09T17:47:12.440035+0000 mgr.y (mgr.14505) 799 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:13.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:13 vm00 bash[28333]: audit 2026-03-09T17:47:12.440035+0000 mgr.y (mgr.14505) 799 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:13 vm00 bash[20770]: audit 2026-03-09T17:47:12.440035+0000 mgr.y (mgr.14505) 799 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T17:47:13.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:13 vm00 bash[20770]: audit 2026-03-09T17:47:12.440035+0000 mgr.y (mgr.14505) 799 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:13.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:13 vm02 bash[23351]: audit 2026-03-09T17:47:12.440035+0000 mgr.y (mgr.14505) 799 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:13.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:13 vm02 bash[23351]: audit 2026-03-09T17:47:12.440035+0000 mgr.y (mgr.14505) 799 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:14 vm00 bash[28333]: cluster 2026-03-09T17:47:12.934786+0000 mgr.y (mgr.14505) 800 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:14 vm00 bash[28333]: cluster 2026-03-09T17:47:12.934786+0000 mgr.y (mgr.14505) 800 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:14 vm00 bash[28333]: audit 2026-03-09T17:47:13.600631+0000 mon.c (mon.2) 847 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:14 vm00 bash[28333]: audit 2026-03-09T17:47:13.600631+0000 mon.c (mon.2) 847 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:14 vm00 bash[20770]: cluster 2026-03-09T17:47:12.934786+0000 mgr.y (mgr.14505) 800 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:14 vm00 bash[20770]: cluster 2026-03-09T17:47:12.934786+0000 mgr.y (mgr.14505) 800 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:14 vm00 bash[20770]: audit 2026-03-09T17:47:13.600631+0000 mon.c (mon.2) 847 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:14 vm00 bash[20770]: audit 2026-03-09T17:47:13.600631+0000 mon.c (mon.2) 847 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:14 vm02 bash[23351]: cluster 2026-03-09T17:47:12.934786+0000 mgr.y (mgr.14505) 800 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:14.635 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:14 vm02 bash[23351]: cluster 2026-03-09T17:47:12.934786+0000 mgr.y (mgr.14505) 800 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:14 vm02 bash[23351]: audit 2026-03-09T17:47:13.600631+0000 mon.c (mon.2) 847 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:14 vm02 bash[23351]: audit 2026-03-09T17:47:13.600631+0000 mon.c (mon.2) 847 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:16 vm00 bash[28333]: cluster 2026-03-09T17:47:14.935121+0000 mgr.y (mgr.14505) 801 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:16 vm00 bash[28333]: cluster 2026-03-09T17:47:14.935121+0000 mgr.y (mgr.14505) 801 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:16 vm00 bash[20770]: cluster 2026-03-09T17:47:14.935121+0000 mgr.y (mgr.14505) 801 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:16 vm00 bash[20770]: cluster 2026-03-09T17:47:14.935121+0000 mgr.y (mgr.14505) 801 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:47:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:47:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:47:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:16 vm02 bash[23351]: cluster 2026-03-09T17:47:14.935121+0000 mgr.y (mgr.14505) 801 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:16 vm02 bash[23351]: cluster 2026-03-09T17:47:14.935121+0000 mgr.y (mgr.14505) 801 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:18 vm02 bash[23351]: cluster 2026-03-09T17:47:16.935437+0000 mgr.y (mgr.14505) 802 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:18 vm02 bash[23351]: cluster 2026-03-09T17:47:16.935437+0000 mgr.y (mgr.14505) 802 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:18 vm00 bash[28333]: cluster 2026-03-09T17:47:16.935437+0000 mgr.y (mgr.14505) 802 : cluster [DBG] pgmap v1286: 228 
pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:18 vm00 bash[28333]: cluster 2026-03-09T17:47:16.935437+0000 mgr.y (mgr.14505) 802 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:18 vm00 bash[20770]: cluster 2026-03-09T17:47:16.935437+0000 mgr.y (mgr.14505) 802 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:18 vm00 bash[20770]: cluster 2026-03-09T17:47:16.935437+0000 mgr.y (mgr.14505) 802 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:20.328 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:47:20 vm02 bash[51223]: logger=cleanup t=2026-03-09T17:47:20.031948551Z level=info msg="Completed cleanup jobs" duration=1.526389ms 2026-03-09T17:47:20.328 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:47:20 vm02 bash[51223]: logger=plugins.update.checker t=2026-03-09T17:47:20.198940977Z level=info msg="Update check succeeded" duration=59.832013ms 2026-03-09T17:47:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:20 vm02 bash[23351]: cluster 2026-03-09T17:47:18.936096+0000 mgr.y (mgr.14505) 803 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:20 vm02 bash[23351]: cluster 2026-03-09T17:47:18.936096+0000 mgr.y (mgr.14505) 803 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:20 vm00 bash[28333]: cluster 2026-03-09T17:47:18.936096+0000 mgr.y (mgr.14505) 803 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:20 vm00 bash[28333]: cluster 2026-03-09T17:47:18.936096+0000 mgr.y (mgr.14505) 803 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:20 vm00 bash[20770]: cluster 2026-03-09T17:47:18.936096+0000 mgr.y (mgr.14505) 803 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:20 vm00 bash[20770]: cluster 2026-03-09T17:47:18.936096+0000 mgr.y (mgr.14505) 803 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:22 vm02 bash[23351]: cluster 2026-03-09T17:47:20.936427+0000 mgr.y (mgr.14505) 804 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:22 vm02 bash[23351]: cluster 
2026-03-09T17:47:20.936427+0000 mgr.y (mgr.14505) 804 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:22.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:47:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:47:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:22 vm00 bash[28333]: cluster 2026-03-09T17:47:20.936427+0000 mgr.y (mgr.14505) 804 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:22 vm00 bash[28333]: cluster 2026-03-09T17:47:20.936427+0000 mgr.y (mgr.14505) 804 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:22 vm00 bash[20770]: cluster 2026-03-09T17:47:20.936427+0000 mgr.y (mgr.14505) 804 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:22 vm00 bash[20770]: cluster 2026-03-09T17:47:20.936427+0000 mgr.y (mgr.14505) 804 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:23.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:23 vm02 bash[23351]: audit 2026-03-09T17:47:22.441470+0000 mgr.y (mgr.14505) 805 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:23 vm02 bash[23351]: audit 2026-03-09T17:47:22.441470+0000 mgr.y (mgr.14505) 805 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:23 vm00 bash[28333]: audit 2026-03-09T17:47:22.441470+0000 mgr.y (mgr.14505) 805 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:23 vm00 bash[28333]: audit 2026-03-09T17:47:22.441470+0000 mgr.y (mgr.14505) 805 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:23 vm00 bash[20770]: audit 2026-03-09T17:47:22.441470+0000 mgr.y (mgr.14505) 805 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:23 vm00 bash[20770]: audit 2026-03-09T17:47:22.441470+0000 mgr.y (mgr.14505) 805 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:24 vm02 bash[23351]: cluster 2026-03-09T17:47:22.937118+0000 mgr.y (mgr.14505) 806 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:24.635 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:24 vm02 bash[23351]: cluster 2026-03-09T17:47:22.937118+0000 mgr.y (mgr.14505) 806 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:24 vm00 bash[28333]: cluster 2026-03-09T17:47:22.937118+0000 mgr.y (mgr.14505) 806 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:24 vm00 bash[28333]: cluster 2026-03-09T17:47:22.937118+0000 mgr.y (mgr.14505) 806 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:24 vm00 bash[20770]: cluster 2026-03-09T17:47:22.937118+0000 mgr.y (mgr.14505) 806 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:24 vm00 bash[20770]: cluster 2026-03-09T17:47:22.937118+0000 mgr.y (mgr.14505) 806 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:26 vm02 bash[23351]: cluster 2026-03-09T17:47:24.937621+0000 mgr.y (mgr.14505) 807 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:26 vm02 bash[23351]: cluster 2026-03-09T17:47:24.937621+0000 mgr.y (mgr.14505) 807 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:26 vm00 bash[28333]: cluster 2026-03-09T17:47:24.937621+0000 mgr.y (mgr.14505) 807 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:26 vm00 bash[28333]: cluster 2026-03-09T17:47:24.937621+0000 mgr.y (mgr.14505) 807 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:26 vm00 bash[20770]: cluster 2026-03-09T17:47:24.937621+0000 mgr.y (mgr.14505) 807 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:26 vm00 bash[20770]: cluster 2026-03-09T17:47:24.937621+0000 mgr.y (mgr.14505) 807 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:47:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:47:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:47:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:28 vm02 bash[23351]: cluster 2026-03-09T17:47:26.937909+0000 mgr.y (mgr.14505) 808 : cluster [DBG] pgmap v1291: 228 pgs: 
228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:28 vm02 bash[23351]: cluster 2026-03-09T17:47:26.937909+0000 mgr.y (mgr.14505) 808 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:28 vm00 bash[28333]: cluster 2026-03-09T17:47:26.937909+0000 mgr.y (mgr.14505) 808 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:28 vm00 bash[28333]: cluster 2026-03-09T17:47:26.937909+0000 mgr.y (mgr.14505) 808 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:28 vm00 bash[20770]: cluster 2026-03-09T17:47:26.937909+0000 mgr.y (mgr.14505) 808 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:28 vm00 bash[20770]: cluster 2026-03-09T17:47:26.937909+0000 mgr.y (mgr.14505) 808 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:29 vm02 bash[23351]: audit 2026-03-09T17:47:28.608128+0000 mon.c (mon.2) 848 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:29 vm02 bash[23351]: audit 2026-03-09T17:47:28.608128+0000 mon.c (mon.2) 848 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:29 vm00 bash[28333]: audit 2026-03-09T17:47:28.608128+0000 mon.c (mon.2) 848 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:29 vm00 bash[28333]: audit 2026-03-09T17:47:28.608128+0000 mon.c (mon.2) 848 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:29 vm00 bash[20770]: audit 2026-03-09T17:47:28.608128+0000 mon.c (mon.2) 848 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:29 vm00 bash[20770]: audit 2026-03-09T17:47:28.608128+0000 mon.c (mon.2) 848 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:30 vm02 bash[23351]: cluster 2026-03-09T17:47:28.938583+0000 mgr.y (mgr.14505) 809 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:30 vm02 bash[23351]: cluster 2026-03-09T17:47:28.938583+0000 mgr.y (mgr.14505) 809 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:30 vm00 bash[28333]: cluster 2026-03-09T17:47:28.938583+0000 mgr.y (mgr.14505) 809 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:30 vm00 bash[28333]: cluster 2026-03-09T17:47:28.938583+0000 mgr.y (mgr.14505) 809 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:30 vm00 bash[20770]: cluster 2026-03-09T17:47:28.938583+0000 mgr.y (mgr.14505) 809 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:30 vm00 bash[20770]: cluster 2026-03-09T17:47:28.938583+0000 mgr.y (mgr.14505) 809 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:32 vm00 bash[28333]: cluster 2026-03-09T17:47:30.938889+0000 mgr.y (mgr.14505) 810 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:32 vm00 bash[28333]: cluster 2026-03-09T17:47:30.938889+0000 mgr.y (mgr.14505) 810 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:32 vm00 bash[20770]: cluster 2026-03-09T17:47:30.938889+0000 mgr.y (mgr.14505) 810 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:32 vm00 bash[20770]: cluster 2026-03-09T17:47:30.938889+0000 mgr.y (mgr.14505) 810 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:32.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:32 vm02 bash[23351]: cluster 2026-03-09T17:47:30.938889+0000 mgr.y (mgr.14505) 810 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:32 vm02 bash[23351]: cluster 2026-03-09T17:47:30.938889+0000 mgr.y (mgr.14505) 810 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:32.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:47:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:47:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:33 vm00 bash[28333]: audit 2026-03-09T17:47:32.452172+0000 mgr.y (mgr.14505) 811 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:33 vm00 bash[28333]: audit 2026-03-09T17:47:32.452172+0000 mgr.y (mgr.14505) 811 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:33 vm00 bash[20770]: audit 2026-03-09T17:47:32.452172+0000 mgr.y (mgr.14505) 811 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:33 vm00 bash[20770]: audit 2026-03-09T17:47:32.452172+0000 mgr.y (mgr.14505) 811 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:33.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:33 vm02 bash[23351]: audit 2026-03-09T17:47:32.452172+0000 mgr.y (mgr.14505) 811 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:33 vm02 bash[23351]: audit 2026-03-09T17:47:32.452172+0000 mgr.y (mgr.14505) 811 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:34 vm00 bash[28333]: cluster 2026-03-09T17:47:32.939465+0000 mgr.y (mgr.14505) 812 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:34 vm00 bash[28333]: cluster 2026-03-09T17:47:32.939465+0000 mgr.y (mgr.14505) 812 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:34 vm00 bash[20770]: cluster 2026-03-09T17:47:32.939465+0000 mgr.y (mgr.14505) 812 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:34 vm00 bash[20770]: cluster 2026-03-09T17:47:32.939465+0000 mgr.y (mgr.14505) 812 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:34 vm02 bash[23351]: cluster 2026-03-09T17:47:32.939465+0000 mgr.y (mgr.14505) 812 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:34 vm02 bash[23351]: cluster 2026-03-09T17:47:32.939465+0000 mgr.y (mgr.14505) 812 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:36 vm00 bash[28333]: cluster 2026-03-09T17:47:34.939987+0000 mgr.y (mgr.14505) 813 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T17:47:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:36 vm00 bash[28333]: cluster 2026-03-09T17:47:34.939987+0000 mgr.y (mgr.14505) 813 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:36 vm00 bash[20770]: cluster 2026-03-09T17:47:34.939987+0000 mgr.y (mgr.14505) 813 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:36 vm00 bash[20770]: cluster 2026-03-09T17:47:34.939987+0000 mgr.y (mgr.14505) 813 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:47:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:47:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:47:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:36 vm02 bash[23351]: cluster 2026-03-09T17:47:34.939987+0000 mgr.y (mgr.14505) 813 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:36 vm02 bash[23351]: cluster 2026-03-09T17:47:34.939987+0000 mgr.y (mgr.14505) 813 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:38 vm00 bash[28333]: cluster 2026-03-09T17:47:36.940344+0000 mgr.y (mgr.14505) 814 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:38 vm00 bash[28333]: cluster 2026-03-09T17:47:36.940344+0000 mgr.y (mgr.14505) 814 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:38 vm00 bash[20770]: cluster 2026-03-09T17:47:36.940344+0000 mgr.y (mgr.14505) 814 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:38 vm00 bash[20770]: cluster 2026-03-09T17:47:36.940344+0000 mgr.y (mgr.14505) 814 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:38 vm02 bash[23351]: cluster 2026-03-09T17:47:36.940344+0000 mgr.y (mgr.14505) 814 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:38 vm02 bash[23351]: cluster 2026-03-09T17:47:36.940344+0000 mgr.y (mgr.14505) 814 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:40 vm00 bash[28333]: cluster 2026-03-09T17:47:38.940966+0000 mgr.y (mgr.14505) 815 : cluster [DBG] pgmap 
v1297: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:40 vm00 bash[28333]: cluster 2026-03-09T17:47:38.940966+0000 mgr.y (mgr.14505) 815 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:40 vm00 bash[20770]: cluster 2026-03-09T17:47:38.940966+0000 mgr.y (mgr.14505) 815 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:40 vm00 bash[20770]: cluster 2026-03-09T17:47:38.940966+0000 mgr.y (mgr.14505) 815 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:40 vm02 bash[23351]: cluster 2026-03-09T17:47:38.940966+0000 mgr.y (mgr.14505) 815 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:40 vm02 bash[23351]: cluster 2026-03-09T17:47:38.940966+0000 mgr.y (mgr.14505) 815 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:42 vm00 bash[28333]: cluster 2026-03-09T17:47:40.941269+0000 mgr.y (mgr.14505) 816 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:42 vm00 bash[28333]: cluster 2026-03-09T17:47:40.941269+0000 mgr.y (mgr.14505) 816 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:42 vm00 bash[20770]: cluster 2026-03-09T17:47:40.941269+0000 mgr.y (mgr.14505) 816 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:42 vm00 bash[20770]: cluster 2026-03-09T17:47:40.941269+0000 mgr.y (mgr.14505) 816 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:42.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:42 vm02 bash[23351]: cluster 2026-03-09T17:47:40.941269+0000 mgr.y (mgr.14505) 816 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:42.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:42 vm02 bash[23351]: cluster 2026-03-09T17:47:40.941269+0000 mgr.y (mgr.14505) 816 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:42.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:47:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:47:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:43 vm00 bash[28333]: audit 
2026-03-09T17:47:42.462743+0000 mgr.y (mgr.14505) 817 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:43 vm00 bash[28333]: audit 2026-03-09T17:47:42.462743+0000 mgr.y (mgr.14505) 817 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:43 vm00 bash[20770]: audit 2026-03-09T17:47:42.462743+0000 mgr.y (mgr.14505) 817 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:43 vm00 bash[20770]: audit 2026-03-09T17:47:42.462743+0000 mgr.y (mgr.14505) 817 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:43.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:43 vm02 bash[23351]: audit 2026-03-09T17:47:42.462743+0000 mgr.y (mgr.14505) 817 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:43.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:43 vm02 bash[23351]: audit 2026-03-09T17:47:42.462743+0000 mgr.y (mgr.14505) 817 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:44 vm00 bash[28333]: cluster 2026-03-09T17:47:42.941826+0000 mgr.y (mgr.14505) 818 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:44 vm00 bash[28333]: cluster 2026-03-09T17:47:42.941826+0000 mgr.y (mgr.14505) 818 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:44 vm00 bash[28333]: audit 2026-03-09T17:47:43.613989+0000 mon.c (mon.2) 849 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:44 vm00 bash[28333]: audit 2026-03-09T17:47:43.613989+0000 mon.c (mon.2) 849 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:44 vm00 bash[20770]: cluster 2026-03-09T17:47:42.941826+0000 mgr.y (mgr.14505) 818 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:44 vm00 bash[20770]: cluster 2026-03-09T17:47:42.941826+0000 mgr.y (mgr.14505) 818 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:44 vm00 bash[20770]: audit 2026-03-09T17:47:43.613989+0000 mon.c (mon.2) 849 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:44 vm00 bash[20770]: audit 2026-03-09T17:47:43.613989+0000 mon.c (mon.2) 849 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:44 vm02 bash[23351]: cluster 2026-03-09T17:47:42.941826+0000 mgr.y (mgr.14505) 818 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:44 vm02 bash[23351]: cluster 2026-03-09T17:47:42.941826+0000 mgr.y (mgr.14505) 818 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:44 vm02 bash[23351]: audit 2026-03-09T17:47:43.613989+0000 mon.c (mon.2) 849 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:44 vm02 bash[23351]: audit 2026-03-09T17:47:43.613989+0000 mon.c (mon.2) 849 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:46 vm00 bash[28333]: cluster 2026-03-09T17:47:44.942193+0000 mgr.y (mgr.14505) 819 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:46 vm00 bash[28333]: cluster 2026-03-09T17:47:44.942193+0000 mgr.y (mgr.14505) 819 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:46 vm00 bash[20770]: cluster 2026-03-09T17:47:44.942193+0000 mgr.y (mgr.14505) 819 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:46 vm00 bash[20770]: cluster 2026-03-09T17:47:44.942193+0000 mgr.y (mgr.14505) 819 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:47:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:47:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:47:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:46 vm02 bash[23351]: cluster 2026-03-09T17:47:44.942193+0000 mgr.y (mgr.14505) 819 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:46 vm02 bash[23351]: cluster 2026-03-09T17:47:44.942193+0000 mgr.y (mgr.14505) 819 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:48.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:48 vm00 bash[28333]: cluster 2026-03-09T17:47:46.942474+0000 mgr.y (mgr.14505) 820 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:48 vm00 bash[28333]: cluster 2026-03-09T17:47:46.942474+0000 mgr.y (mgr.14505) 820 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:48 vm00 bash[20770]: cluster 2026-03-09T17:47:46.942474+0000 mgr.y (mgr.14505) 820 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:48 vm00 bash[20770]: cluster 2026-03-09T17:47:46.942474+0000 mgr.y (mgr.14505) 820 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:48 vm02 bash[23351]: cluster 2026-03-09T17:47:46.942474+0000 mgr.y (mgr.14505) 820 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:48 vm02 bash[23351]: cluster 2026-03-09T17:47:46.942474+0000 mgr.y (mgr.14505) 820 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:49 vm00 bash[28333]: cluster 2026-03-09T17:47:48.943134+0000 mgr.y (mgr.14505) 821 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:49.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:49 vm00 bash[28333]: cluster 2026-03-09T17:47:48.943134+0000 mgr.y (mgr.14505) 821 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:49 vm00 bash[20770]: cluster 2026-03-09T17:47:48.943134+0000 mgr.y (mgr.14505) 821 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:49.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:49 vm00 bash[20770]: cluster 2026-03-09T17:47:48.943134+0000 mgr.y (mgr.14505) 821 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:49.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:49 vm02 bash[23351]: cluster 2026-03-09T17:47:48.943134+0000 mgr.y (mgr.14505) 821 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:49.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:49 vm02 bash[23351]: cluster 2026-03-09T17:47:48.943134+0000 mgr.y (mgr.14505) 821 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:52 vm00 bash[28333]: cluster 
2026-03-09T17:47:50.943416+0000 mgr.y (mgr.14505) 822 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:52.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:52 vm00 bash[28333]: cluster 2026-03-09T17:47:50.943416+0000 mgr.y (mgr.14505) 822 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:52 vm00 bash[20770]: cluster 2026-03-09T17:47:50.943416+0000 mgr.y (mgr.14505) 822 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:52.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:52 vm00 bash[20770]: cluster 2026-03-09T17:47:50.943416+0000 mgr.y (mgr.14505) 822 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:52.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:52 vm02 bash[23351]: cluster 2026-03-09T17:47:50.943416+0000 mgr.y (mgr.14505) 822 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:52.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:52 vm02 bash[23351]: cluster 2026-03-09T17:47:50.943416+0000 mgr.y (mgr.14505) 822 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:52.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:47:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:47:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:53 vm00 bash[28333]: audit 2026-03-09T17:47:52.474074+0000 mgr.y (mgr.14505) 823 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:53.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:53 vm00 bash[28333]: audit 2026-03-09T17:47:52.474074+0000 mgr.y (mgr.14505) 823 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:53 vm00 bash[20770]: audit 2026-03-09T17:47:52.474074+0000 mgr.y (mgr.14505) 823 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:53.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:53 vm00 bash[20770]: audit 2026-03-09T17:47:52.474074+0000 mgr.y (mgr.14505) 823 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:53.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:53 vm02 bash[23351]: audit 2026-03-09T17:47:52.474074+0000 mgr.y (mgr.14505) 823 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:53.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:53 vm02 bash[23351]: audit 2026-03-09T17:47:52.474074+0000 mgr.y (mgr.14505) 823 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:47:54.288 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:54 vm00 bash[28333]: cluster 2026-03-09T17:47:52.943919+0000 mgr.y (mgr.14505) 824 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:54.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:54 vm00 bash[28333]: cluster 2026-03-09T17:47:52.943919+0000 mgr.y (mgr.14505) 824 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:54 vm00 bash[20770]: cluster 2026-03-09T17:47:52.943919+0000 mgr.y (mgr.14505) 824 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:54.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:54 vm00 bash[20770]: cluster 2026-03-09T17:47:52.943919+0000 mgr.y (mgr.14505) 824 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:54 vm02 bash[23351]: cluster 2026-03-09T17:47:52.943919+0000 mgr.y (mgr.14505) 824 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:54 vm02 bash[23351]: cluster 2026-03-09T17:47:52.943919+0000 mgr.y (mgr.14505) 824 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:47:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:56 vm00 bash[28333]: cluster 2026-03-09T17:47:54.944314+0000 mgr.y (mgr.14505) 825 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:56.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:56 vm00 bash[28333]: cluster 2026-03-09T17:47:54.944314+0000 mgr.y (mgr.14505) 825 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:56 vm00 bash[20770]: cluster 2026-03-09T17:47:54.944314+0000 mgr.y (mgr.14505) 825 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:56.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:56 vm00 bash[20770]: cluster 2026-03-09T17:47:54.944314+0000 mgr.y (mgr.14505) 825 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:56 vm02 bash[23351]: cluster 2026-03-09T17:47:54.944314+0000 mgr.y (mgr.14505) 825 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:56.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:56 vm02 bash[23351]: cluster 2026-03-09T17:47:54.944314+0000 mgr.y (mgr.14505) 825 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:47:56 vm00 bash[21037]: 
::ffff:192.168.123.102 - - [09/Mar/2026:17:47:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:47:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:58 vm00 bash[20770]: cluster 2026-03-09T17:47:56.944689+0000 mgr.y (mgr.14505) 826 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:58.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:58 vm00 bash[20770]: cluster 2026-03-09T17:47:56.944689+0000 mgr.y (mgr.14505) 826 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:58 vm00 bash[28333]: cluster 2026-03-09T17:47:56.944689+0000 mgr.y (mgr.14505) 826 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:58.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:58 vm00 bash[28333]: cluster 2026-03-09T17:47:56.944689+0000 mgr.y (mgr.14505) 826 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:58 vm02 bash[23351]: cluster 2026-03-09T17:47:56.944689+0000 mgr.y (mgr.14505) 826 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:58 vm02 bash[23351]: cluster 2026-03-09T17:47:56.944689+0000 mgr.y (mgr.14505) 826 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:47:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:59 vm00 bash[20770]: audit 2026-03-09T17:47:58.620964+0000 mon.c (mon.2) 850 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:59.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:47:59 vm00 bash[20770]: audit 2026-03-09T17:47:58.620964+0000 mon.c (mon.2) 850 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:59.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:59 vm00 bash[28333]: audit 2026-03-09T17:47:58.620964+0000 mon.c (mon.2) 850 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:59.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:47:59 vm00 bash[28333]: audit 2026-03-09T17:47:58.620964+0000 mon.c (mon.2) 850 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:59 vm02 bash[23351]: audit 2026-03-09T17:47:58.620964+0000 mon.c (mon.2) 850 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:47:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:47:59 vm02 bash[23351]: audit 2026-03-09T17:47:58.620964+0000 mon.c (mon.2) 850 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:00 vm00 bash[20770]: cluster 2026-03-09T17:47:58.945370+0000 mgr.y (mgr.14505) 827 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:00 vm00 bash[20770]: cluster 2026-03-09T17:47:58.945370+0000 mgr.y (mgr.14505) 827 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:00 vm00 bash[20770]: audit 2026-03-09T17:47:59.728526+0000 mon.c (mon.2) 851 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:00 vm00 bash[20770]: audit 2026-03-09T17:47:59.728526+0000 mon.c (mon.2) 851 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:00 vm00 bash[28333]: cluster 2026-03-09T17:47:58.945370+0000 mgr.y (mgr.14505) 827 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:00 vm00 bash[28333]: cluster 2026-03-09T17:47:58.945370+0000 mgr.y (mgr.14505) 827 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:00 vm00 bash[28333]: audit 2026-03-09T17:47:59.728526+0000 mon.c (mon.2) 851 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:48:00.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:00 vm00 bash[28333]: audit 2026-03-09T17:47:59.728526+0000 mon.c (mon.2) 851 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:48:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:00 vm02 bash[23351]: cluster 2026-03-09T17:47:58.945370+0000 mgr.y (mgr.14505) 827 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:00 vm02 bash[23351]: cluster 2026-03-09T17:47:58.945370+0000 mgr.y (mgr.14505) 827 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:00 vm02 bash[23351]: audit 2026-03-09T17:47:59.728526+0000 mon.c (mon.2) 851 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:48:00.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:00 vm02 bash[23351]: audit 2026-03-09T17:47:59.728526+0000 mon.c (mon.2) 851 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:48:01.288 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.064562+0000 mon.c (mon.2) 852 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.064562+0000 mon.c (mon.2) 852 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.065329+0000 mon.a (mon.0) 3524 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.065329+0000 mon.a (mon.0) 3524 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.066190+0000 mon.c (mon.2) 853 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.066190+0000 mon.c (mon.2) 853 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.066428+0000 mon.a (mon.0) 3525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.066428+0000 mon.a (mon.0) 3525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.067321+0000 mon.c (mon.2) 854 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.067321+0000 mon.c (mon.2) 854 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.068065+0000 mon.c (mon.2) 855 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.068065+0000 mon.c (mon.2) 855 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.073494+0000 mon.a (mon.0) 3526 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:01 vm00 bash[20770]: audit 2026-03-09T17:48:00.073494+0000 mon.a (mon.0) 3526 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.064562+0000 mon.c (mon.2) 852 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.064562+0000 mon.c (mon.2) 852 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.065329+0000 mon.a (mon.0) 3524 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.065329+0000 mon.a (mon.0) 3524 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.066190+0000 mon.c (mon.2) 853 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.066190+0000 mon.c (mon.2) 853 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.066428+0000 mon.a (mon.0) 3525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.066428+0000 mon.a (mon.0) 3525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.067321+0000 mon.c (mon.2) 854 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.067321+0000 mon.c (mon.2) 854 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.068065+0000 mon.c (mon.2) 855 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.068065+0000 mon.c (mon.2) 855 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.073494+0000 mon.a (mon.0) 3526 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:48:01.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:01 vm00 bash[28333]: audit 2026-03-09T17:48:00.073494+0000 mon.a (mon.0) 3526 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.064562+0000 mon.c (mon.2) 852 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.064562+0000 mon.c (mon.2) 852 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.065329+0000 mon.a (mon.0) 3524 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.065329+0000 mon.a (mon.0) 3524 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.066190+0000 mon.c (mon.2) 853 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.066190+0000 mon.c (mon.2) 853 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.066428+0000 mon.a (mon.0) 3525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.066428+0000 mon.a (mon.0) 3525 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.067321+0000 mon.c (mon.2) 854 : audit [DBG] 
from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.067321+0000 mon.c (mon.2) 854 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.068065+0000 mon.c (mon.2) 855 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.068065+0000 mon.c (mon.2) 855 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.073494+0000 mon.a (mon.0) 3526 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:48:01.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:01 vm02 bash[23351]: audit 2026-03-09T17:48:00.073494+0000 mon.a (mon.0) 3526 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:48:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:02 vm02 bash[23351]: cluster 2026-03-09T17:48:00.945708+0000 mgr.y (mgr.14505) 828 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:02 vm02 bash[23351]: cluster 2026-03-09T17:48:00.945708+0000 mgr.y (mgr.14505) 828 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:02 vm00 bash[20770]: cluster 2026-03-09T17:48:00.945708+0000 mgr.y (mgr.14505) 828 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:02 vm00 bash[20770]: cluster 2026-03-09T17:48:00.945708+0000 mgr.y (mgr.14505) 828 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:02 vm00 bash[28333]: cluster 2026-03-09T17:48:00.945708+0000 mgr.y (mgr.14505) 828 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:02 vm00 bash[28333]: cluster 2026-03-09T17:48:00.945708+0000 mgr.y (mgr.14505) 828 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:02.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:48:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:48:03.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:03 vm02 bash[23351]: audit 2026-03-09T17:48:02.481465+0000 mgr.y (mgr.14505) 829 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T17:48:03.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:03 vm02 bash[23351]: audit 2026-03-09T17:48:02.481465+0000 mgr.y (mgr.14505) 829 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:03 vm00 bash[20770]: audit 2026-03-09T17:48:02.481465+0000 mgr.y (mgr.14505) 829 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:03.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:03 vm00 bash[20770]: audit 2026-03-09T17:48:02.481465+0000 mgr.y (mgr.14505) 829 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:03 vm00 bash[28333]: audit 2026-03-09T17:48:02.481465+0000 mgr.y (mgr.14505) 829 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:03.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:03 vm00 bash[28333]: audit 2026-03-09T17:48:02.481465+0000 mgr.y (mgr.14505) 829 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:04 vm02 bash[23351]: cluster 2026-03-09T17:48:02.946426+0000 mgr.y (mgr.14505) 830 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:04 vm02 bash[23351]: cluster 2026-03-09T17:48:02.946426+0000 mgr.y (mgr.14505) 830 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:04 vm00 bash[28333]: cluster 2026-03-09T17:48:02.946426+0000 mgr.y (mgr.14505) 830 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:04 vm00 bash[28333]: cluster 2026-03-09T17:48:02.946426+0000 mgr.y (mgr.14505) 830 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:04 vm00 bash[20770]: cluster 2026-03-09T17:48:02.946426+0000 mgr.y (mgr.14505) 830 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:04 vm00 bash[20770]: cluster 2026-03-09T17:48:02.946426+0000 mgr.y (mgr.14505) 830 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:06 vm00 bash[20770]: cluster 2026-03-09T17:48:04.946862+0000 mgr.y (mgr.14505) 831 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:06 vm00 
bash[20770]: cluster 2026-03-09T17:48:04.946862+0000 mgr.y (mgr.14505) 831 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:06 vm00 bash[28333]: cluster 2026-03-09T17:48:04.946862+0000 mgr.y (mgr.14505) 831 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:06 vm00 bash[28333]: cluster 2026-03-09T17:48:04.946862+0000 mgr.y (mgr.14505) 831 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:48:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:48:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:48:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:06 vm02 bash[23351]: cluster 2026-03-09T17:48:04.946862+0000 mgr.y (mgr.14505) 831 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:06 vm02 bash[23351]: cluster 2026-03-09T17:48:04.946862+0000 mgr.y (mgr.14505) 831 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:08 vm00 bash[20770]: cluster 2026-03-09T17:48:06.947243+0000 mgr.y (mgr.14505) 832 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:08 vm00 bash[20770]: cluster 2026-03-09T17:48:06.947243+0000 mgr.y (mgr.14505) 832 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:08 vm00 bash[28333]: cluster 2026-03-09T17:48:06.947243+0000 mgr.y (mgr.14505) 832 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:08 vm00 bash[28333]: cluster 2026-03-09T17:48:06.947243+0000 mgr.y (mgr.14505) 832 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:08 vm02 bash[23351]: cluster 2026-03-09T17:48:06.947243+0000 mgr.y (mgr.14505) 832 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:08 vm02 bash[23351]: cluster 2026-03-09T17:48:06.947243+0000 mgr.y (mgr.14505) 832 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:10 vm00 bash[20770]: cluster 2026-03-09T17:48:08.947933+0000 mgr.y (mgr.14505) 833 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:10 vm00 bash[20770]: cluster 2026-03-09T17:48:08.947933+0000 mgr.y (mgr.14505) 833 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:10 vm00 bash[28333]: cluster 2026-03-09T17:48:08.947933+0000 mgr.y (mgr.14505) 833 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:10 vm00 bash[28333]: cluster 2026-03-09T17:48:08.947933+0000 mgr.y (mgr.14505) 833 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:10 vm02 bash[23351]: cluster 2026-03-09T17:48:08.947933+0000 mgr.y (mgr.14505) 833 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:10 vm02 bash[23351]: cluster 2026-03-09T17:48:08.947933+0000 mgr.y (mgr.14505) 833 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:12.486 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:12 vm02 bash[23351]: cluster 2026-03-09T17:48:10.948267+0000 mgr.y (mgr.14505) 834 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:12.487 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:12 vm02 bash[23351]: cluster 2026-03-09T17:48:10.948267+0000 mgr.y (mgr.14505) 834 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:12 vm00 bash[20770]: cluster 2026-03-09T17:48:10.948267+0000 mgr.y (mgr.14505) 834 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:12 vm00 bash[20770]: cluster 2026-03-09T17:48:10.948267+0000 mgr.y (mgr.14505) 834 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:12 vm00 bash[28333]: cluster 2026-03-09T17:48:10.948267+0000 mgr.y (mgr.14505) 834 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:12 vm00 bash[28333]: cluster 2026-03-09T17:48:10.948267+0000 mgr.y (mgr.14505) 834 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:12.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:48:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:14 vm00 bash[28333]: audit 2026-03-09T17:48:12.486228+0000 mgr.y (mgr.14505) 835 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:14 vm00 bash[28333]: audit 2026-03-09T17:48:12.486228+0000 mgr.y (mgr.14505) 835 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:14 vm00 bash[28333]: cluster 2026-03-09T17:48:12.948893+0000 mgr.y (mgr.14505) 836 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:14 vm00 bash[28333]: cluster 2026-03-09T17:48:12.948893+0000 mgr.y (mgr.14505) 836 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:14 vm00 bash[28333]: audit 2026-03-09T17:48:13.626798+0000 mon.c (mon.2) 856 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:14 vm00 bash[28333]: audit 2026-03-09T17:48:13.626798+0000 mon.c (mon.2) 856 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:14 vm00 bash[20770]: audit 2026-03-09T17:48:12.486228+0000 mgr.y (mgr.14505) 835 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:14 vm00 bash[20770]: audit 2026-03-09T17:48:12.486228+0000 mgr.y (mgr.14505) 835 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:14 vm00 bash[20770]: cluster 2026-03-09T17:48:12.948893+0000 mgr.y (mgr.14505) 836 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:14 vm00 bash[20770]: cluster 2026-03-09T17:48:12.948893+0000 mgr.y (mgr.14505) 836 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:14 vm00 bash[20770]: audit 2026-03-09T17:48:13.626798+0000 mon.c (mon.2) 856 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:14 vm00 bash[20770]: audit 2026-03-09T17:48:13.626798+0000 mon.c (mon.2) 856 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:14 vm02 bash[23351]: audit 2026-03-09T17:48:12.486228+0000 mgr.y (mgr.14505) 835 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T17:48:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:14 vm02 bash[23351]: audit 2026-03-09T17:48:12.486228+0000 mgr.y (mgr.14505) 835 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:14 vm02 bash[23351]: cluster 2026-03-09T17:48:12.948893+0000 mgr.y (mgr.14505) 836 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:14 vm02 bash[23351]: cluster 2026-03-09T17:48:12.948893+0000 mgr.y (mgr.14505) 836 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:14 vm02 bash[23351]: audit 2026-03-09T17:48:13.626798+0000 mon.c (mon.2) 856 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:14 vm02 bash[23351]: audit 2026-03-09T17:48:13.626798+0000 mon.c (mon.2) 856 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:16.253 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:16 vm00 bash[20770]: cluster 2026-03-09T17:48:14.949235+0000 mgr.y (mgr.14505) 837 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:16.253 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:16 vm00 bash[20770]: cluster 2026-03-09T17:48:14.949235+0000 mgr.y (mgr.14505) 837 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:48:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:48:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:48:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:16 vm00 bash[28333]: cluster 2026-03-09T17:48:14.949235+0000 mgr.y (mgr.14505) 837 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:16 vm00 bash[28333]: cluster 2026-03-09T17:48:14.949235+0000 mgr.y (mgr.14505) 837 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:16 vm02 bash[23351]: cluster 2026-03-09T17:48:14.949235+0000 mgr.y (mgr.14505) 837 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:16 vm02 bash[23351]: cluster 2026-03-09T17:48:14.949235+0000 mgr.y (mgr.14505) 837 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:18 vm00 bash[28333]: cluster 2026-03-09T17:48:16.949558+0000 mgr.y 
(mgr.14505) 838 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:18 vm00 bash[28333]: cluster 2026-03-09T17:48:16.949558+0000 mgr.y (mgr.14505) 838 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:18 vm00 bash[20770]: cluster 2026-03-09T17:48:16.949558+0000 mgr.y (mgr.14505) 838 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:18 vm00 bash[20770]: cluster 2026-03-09T17:48:16.949558+0000 mgr.y (mgr.14505) 838 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:18 vm02 bash[23351]: cluster 2026-03-09T17:48:16.949558+0000 mgr.y (mgr.14505) 838 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:18 vm02 bash[23351]: cluster 2026-03-09T17:48:16.949558+0000 mgr.y (mgr.14505) 838 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:20 vm00 bash[28333]: cluster 2026-03-09T17:48:18.950163+0000 mgr.y (mgr.14505) 839 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:20 vm00 bash[28333]: cluster 2026-03-09T17:48:18.950163+0000 mgr.y (mgr.14505) 839 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:20 vm00 bash[20770]: cluster 2026-03-09T17:48:18.950163+0000 mgr.y (mgr.14505) 839 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:20 vm00 bash[20770]: cluster 2026-03-09T17:48:18.950163+0000 mgr.y (mgr.14505) 839 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:20 vm02 bash[23351]: cluster 2026-03-09T17:48:18.950163+0000 mgr.y (mgr.14505) 839 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:20 vm02 bash[23351]: cluster 2026-03-09T17:48:18.950163+0000 mgr.y (mgr.14505) 839 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:22 vm00 bash[28333]: cluster 2026-03-09T17:48:20.950388+0000 mgr.y (mgr.14505) 840 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 
1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:22 vm00 bash[28333]: cluster 2026-03-09T17:48:20.950388+0000 mgr.y (mgr.14505) 840 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:22 vm00 bash[20770]: cluster 2026-03-09T17:48:20.950388+0000 mgr.y (mgr.14505) 840 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:22 vm00 bash[20770]: cluster 2026-03-09T17:48:20.950388+0000 mgr.y (mgr.14505) 840 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:22 vm02 bash[23351]: cluster 2026-03-09T17:48:20.950388+0000 mgr.y (mgr.14505) 840 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:22 vm02 bash[23351]: cluster 2026-03-09T17:48:20.950388+0000 mgr.y (mgr.14505) 840 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:22.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:48:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:24 vm00 bash[28333]: audit 2026-03-09T17:48:22.489372+0000 mgr.y (mgr.14505) 841 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:24 vm00 bash[28333]: audit 2026-03-09T17:48:22.489372+0000 mgr.y (mgr.14505) 841 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:24 vm00 bash[28333]: cluster 2026-03-09T17:48:22.950869+0000 mgr.y (mgr.14505) 842 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:24 vm00 bash[28333]: cluster 2026-03-09T17:48:22.950869+0000 mgr.y (mgr.14505) 842 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:24 vm00 bash[20770]: audit 2026-03-09T17:48:22.489372+0000 mgr.y (mgr.14505) 841 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:24 vm00 bash[20770]: audit 2026-03-09T17:48:22.489372+0000 mgr.y (mgr.14505) 841 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:24 vm00 bash[20770]: cluster 2026-03-09T17:48:22.950869+0000 mgr.y (mgr.14505) 842 : cluster [DBG] pgmap 
v1319: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:24 vm00 bash[20770]: cluster 2026-03-09T17:48:22.950869+0000 mgr.y (mgr.14505) 842 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:24 vm02 bash[23351]: audit 2026-03-09T17:48:22.489372+0000 mgr.y (mgr.14505) 841 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:24 vm02 bash[23351]: audit 2026-03-09T17:48:22.489372+0000 mgr.y (mgr.14505) 841 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:24 vm02 bash[23351]: cluster 2026-03-09T17:48:22.950869+0000 mgr.y (mgr.14505) 842 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:24 vm02 bash[23351]: cluster 2026-03-09T17:48:22.950869+0000 mgr.y (mgr.14505) 842 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:26 vm02 bash[23351]: cluster 2026-03-09T17:48:24.951198+0000 mgr.y (mgr.14505) 843 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:26 vm02 bash[23351]: cluster 2026-03-09T17:48:24.951198+0000 mgr.y (mgr.14505) 843 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:26 vm00 bash[20770]: cluster 2026-03-09T17:48:24.951198+0000 mgr.y (mgr.14505) 843 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:26 vm00 bash[20770]: cluster 2026-03-09T17:48:24.951198+0000 mgr.y (mgr.14505) 843 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:26 vm00 bash[28333]: cluster 2026-03-09T17:48:24.951198+0000 mgr.y (mgr.14505) 843 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:26 vm00 bash[28333]: cluster 2026-03-09T17:48:24.951198+0000 mgr.y (mgr.14505) 843 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:48:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:48:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:48:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:48:28 vm02 bash[23351]: cluster 2026-03-09T17:48:26.951487+0000 mgr.y (mgr.14505) 844 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:28 vm02 bash[23351]: cluster 2026-03-09T17:48:26.951487+0000 mgr.y (mgr.14505) 844 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:28 vm00 bash[20770]: cluster 2026-03-09T17:48:26.951487+0000 mgr.y (mgr.14505) 844 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:28 vm00 bash[20770]: cluster 2026-03-09T17:48:26.951487+0000 mgr.y (mgr.14505) 844 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:28 vm00 bash[28333]: cluster 2026-03-09T17:48:26.951487+0000 mgr.y (mgr.14505) 844 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:28 vm00 bash[28333]: cluster 2026-03-09T17:48:26.951487+0000 mgr.y (mgr.14505) 844 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:29 vm02 bash[23351]: audit 2026-03-09T17:48:28.632652+0000 mon.c (mon.2) 857 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:29 vm02 bash[23351]: audit 2026-03-09T17:48:28.632652+0000 mon.c (mon.2) 857 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:29 vm00 bash[28333]: audit 2026-03-09T17:48:28.632652+0000 mon.c (mon.2) 857 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:29 vm00 bash[28333]: audit 2026-03-09T17:48:28.632652+0000 mon.c (mon.2) 857 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:29 vm00 bash[20770]: audit 2026-03-09T17:48:28.632652+0000 mon.c (mon.2) 857 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:29 vm00 bash[20770]: audit 2026-03-09T17:48:28.632652+0000 mon.c (mon.2) 857 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:30 vm02 bash[23351]: cluster 2026-03-09T17:48:28.952085+0000 
mgr.y (mgr.14505) 845 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:30 vm02 bash[23351]: cluster 2026-03-09T17:48:28.952085+0000 mgr.y (mgr.14505) 845 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:30 vm00 bash[28333]: cluster 2026-03-09T17:48:28.952085+0000 mgr.y (mgr.14505) 845 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:30 vm00 bash[28333]: cluster 2026-03-09T17:48:28.952085+0000 mgr.y (mgr.14505) 845 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:30 vm00 bash[20770]: cluster 2026-03-09T17:48:28.952085+0000 mgr.y (mgr.14505) 845 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:30 vm00 bash[20770]: cluster 2026-03-09T17:48:28.952085+0000 mgr.y (mgr.14505) 845 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:32 vm02 bash[23351]: cluster 2026-03-09T17:48:30.952432+0000 mgr.y (mgr.14505) 846 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:32 vm02 bash[23351]: cluster 2026-03-09T17:48:30.952432+0000 mgr.y (mgr.14505) 846 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:32.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:48:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:48:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:32 vm00 bash[28333]: cluster 2026-03-09T17:48:30.952432+0000 mgr.y (mgr.14505) 846 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:32 vm00 bash[28333]: cluster 2026-03-09T17:48:30.952432+0000 mgr.y (mgr.14505) 846 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:32 vm00 bash[20770]: cluster 2026-03-09T17:48:30.952432+0000 mgr.y (mgr.14505) 846 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:32 vm00 bash[20770]: cluster 2026-03-09T17:48:30.952432+0000 mgr.y (mgr.14505) 846 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:48:34 vm02 bash[23351]: audit 2026-03-09T17:48:32.496667+0000 mgr.y (mgr.14505) 847 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:34 vm02 bash[23351]: audit 2026-03-09T17:48:32.496667+0000 mgr.y (mgr.14505) 847 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:34 vm02 bash[23351]: cluster 2026-03-09T17:48:32.952918+0000 mgr.y (mgr.14505) 848 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:34 vm02 bash[23351]: cluster 2026-03-09T17:48:32.952918+0000 mgr.y (mgr.14505) 848 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:34 vm00 bash[28333]: audit 2026-03-09T17:48:32.496667+0000 mgr.y (mgr.14505) 847 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:34 vm00 bash[28333]: audit 2026-03-09T17:48:32.496667+0000 mgr.y (mgr.14505) 847 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:34 vm00 bash[28333]: cluster 2026-03-09T17:48:32.952918+0000 mgr.y (mgr.14505) 848 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:34 vm00 bash[28333]: cluster 2026-03-09T17:48:32.952918+0000 mgr.y (mgr.14505) 848 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:34 vm00 bash[20770]: audit 2026-03-09T17:48:32.496667+0000 mgr.y (mgr.14505) 847 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:34 vm00 bash[20770]: audit 2026-03-09T17:48:32.496667+0000 mgr.y (mgr.14505) 847 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:34 vm00 bash[20770]: cluster 2026-03-09T17:48:32.952918+0000 mgr.y (mgr.14505) 848 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:34 vm00 bash[20770]: cluster 2026-03-09T17:48:32.952918+0000 mgr.y (mgr.14505) 848 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:36 vm02 bash[23351]: cluster 2026-03-09T17:48:34.953243+0000 mgr.y (mgr.14505) 849 : cluster 
[DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:36 vm02 bash[23351]: cluster 2026-03-09T17:48:34.953243+0000 mgr.y (mgr.14505) 849 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:36 vm00 bash[28333]: cluster 2026-03-09T17:48:34.953243+0000 mgr.y (mgr.14505) 849 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:36 vm00 bash[28333]: cluster 2026-03-09T17:48:34.953243+0000 mgr.y (mgr.14505) 849 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:36 vm00 bash[20770]: cluster 2026-03-09T17:48:34.953243+0000 mgr.y (mgr.14505) 849 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:36 vm00 bash[20770]: cluster 2026-03-09T17:48:34.953243+0000 mgr.y (mgr.14505) 849 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:48:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:48:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:48:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:38 vm02 bash[23351]: cluster 2026-03-09T17:48:36.953519+0000 mgr.y (mgr.14505) 850 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:38 vm02 bash[23351]: cluster 2026-03-09T17:48:36.953519+0000 mgr.y (mgr.14505) 850 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:38 vm00 bash[28333]: cluster 2026-03-09T17:48:36.953519+0000 mgr.y (mgr.14505) 850 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:38 vm00 bash[28333]: cluster 2026-03-09T17:48:36.953519+0000 mgr.y (mgr.14505) 850 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:38 vm00 bash[20770]: cluster 2026-03-09T17:48:36.953519+0000 mgr.y (mgr.14505) 850 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:38 vm00 bash[20770]: cluster 2026-03-09T17:48:36.953519+0000 mgr.y (mgr.14505) 850 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:40.635 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:40 vm02 bash[23351]: cluster 2026-03-09T17:48:38.954093+0000 mgr.y (mgr.14505) 851 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:40 vm02 bash[23351]: cluster 2026-03-09T17:48:38.954093+0000 mgr.y (mgr.14505) 851 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:40 vm00 bash[28333]: cluster 2026-03-09T17:48:38.954093+0000 mgr.y (mgr.14505) 851 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:40 vm00 bash[28333]: cluster 2026-03-09T17:48:38.954093+0000 mgr.y (mgr.14505) 851 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:40 vm00 bash[20770]: cluster 2026-03-09T17:48:38.954093+0000 mgr.y (mgr.14505) 851 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:40 vm00 bash[20770]: cluster 2026-03-09T17:48:38.954093+0000 mgr.y (mgr.14505) 851 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:42 vm02 bash[23351]: cluster 2026-03-09T17:48:40.954416+0000 mgr.y (mgr.14505) 852 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:42 vm02 bash[23351]: cluster 2026-03-09T17:48:40.954416+0000 mgr.y (mgr.14505) 852 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:42.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:48:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:48:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:42 vm00 bash[28333]: cluster 2026-03-09T17:48:40.954416+0000 mgr.y (mgr.14505) 852 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:42 vm00 bash[28333]: cluster 2026-03-09T17:48:40.954416+0000 mgr.y (mgr.14505) 852 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:42 vm00 bash[20770]: cluster 2026-03-09T17:48:40.954416+0000 mgr.y (mgr.14505) 852 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:42 vm00 bash[20770]: cluster 2026-03-09T17:48:40.954416+0000 mgr.y (mgr.14505) 852 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:44 vm02 bash[23351]: audit 2026-03-09T17:48:42.503328+0000 mgr.y (mgr.14505) 853 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:44 vm02 bash[23351]: audit 2026-03-09T17:48:42.503328+0000 mgr.y (mgr.14505) 853 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:44 vm02 bash[23351]: cluster 2026-03-09T17:48:42.954922+0000 mgr.y (mgr.14505) 854 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:44 vm02 bash[23351]: cluster 2026-03-09T17:48:42.954922+0000 mgr.y (mgr.14505) 854 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:44 vm02 bash[23351]: audit 2026-03-09T17:48:43.638657+0000 mon.c (mon.2) 858 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:44 vm02 bash[23351]: audit 2026-03-09T17:48:43.638657+0000 mon.c (mon.2) 858 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:44 vm00 bash[28333]: audit 2026-03-09T17:48:42.503328+0000 mgr.y (mgr.14505) 853 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:44 vm00 bash[28333]: audit 2026-03-09T17:48:42.503328+0000 mgr.y (mgr.14505) 853 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:44 vm00 bash[28333]: cluster 2026-03-09T17:48:42.954922+0000 mgr.y (mgr.14505) 854 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:44 vm00 bash[28333]: cluster 2026-03-09T17:48:42.954922+0000 mgr.y (mgr.14505) 854 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:44 vm00 bash[28333]: audit 2026-03-09T17:48:43.638657+0000 mon.c (mon.2) 858 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:44 vm00 bash[28333]: audit 2026-03-09T17:48:43.638657+0000 mon.c (mon.2) 858 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:44 vm00 bash[20770]: audit 2026-03-09T17:48:42.503328+0000 mgr.y (mgr.14505) 853 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:44 vm00 bash[20770]: audit 2026-03-09T17:48:42.503328+0000 mgr.y (mgr.14505) 853 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:44 vm00 bash[20770]: cluster 2026-03-09T17:48:42.954922+0000 mgr.y (mgr.14505) 854 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:44 vm00 bash[20770]: cluster 2026-03-09T17:48:42.954922+0000 mgr.y (mgr.14505) 854 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:44 vm00 bash[20770]: audit 2026-03-09T17:48:43.638657+0000 mon.c (mon.2) 858 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:44 vm00 bash[20770]: audit 2026-03-09T17:48:43.638657+0000 mon.c (mon.2) 858 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:46 vm02 bash[23351]: cluster 2026-03-09T17:48:44.955260+0000 mgr.y (mgr.14505) 855 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:46 vm02 bash[23351]: cluster 2026-03-09T17:48:44.955260+0000 mgr.y (mgr.14505) 855 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:46 vm00 bash[28333]: cluster 2026-03-09T17:48:44.955260+0000 mgr.y (mgr.14505) 855 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:46 vm00 bash[28333]: cluster 2026-03-09T17:48:44.955260+0000 mgr.y (mgr.14505) 855 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:46 vm00 bash[20770]: cluster 2026-03-09T17:48:44.955260+0000 mgr.y (mgr.14505) 855 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:46 vm00 bash[20770]: cluster 2026-03-09T17:48:44.955260+0000 mgr.y (mgr.14505) 855 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:48:46 vm00 bash[21037]: 
::ffff:192.168.123.102 - - [09/Mar/2026:17:48:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:48:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:48 vm02 bash[23351]: cluster 2026-03-09T17:48:46.955539+0000 mgr.y (mgr.14505) 856 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:48 vm02 bash[23351]: cluster 2026-03-09T17:48:46.955539+0000 mgr.y (mgr.14505) 856 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:48 vm00 bash[28333]: cluster 2026-03-09T17:48:46.955539+0000 mgr.y (mgr.14505) 856 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:48 vm00 bash[28333]: cluster 2026-03-09T17:48:46.955539+0000 mgr.y (mgr.14505) 856 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:48 vm00 bash[20770]: cluster 2026-03-09T17:48:46.955539+0000 mgr.y (mgr.14505) 856 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:48 vm00 bash[20770]: cluster 2026-03-09T17:48:46.955539+0000 mgr.y (mgr.14505) 856 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:50.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:50 vm00 bash[28333]: cluster 2026-03-09T17:48:48.956162+0000 mgr.y (mgr.14505) 857 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:50.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:50 vm00 bash[28333]: cluster 2026-03-09T17:48:48.956162+0000 mgr.y (mgr.14505) 857 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:50 vm02 bash[23351]: cluster 2026-03-09T17:48:48.956162+0000 mgr.y (mgr.14505) 857 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:50 vm02 bash[23351]: cluster 2026-03-09T17:48:48.956162+0000 mgr.y (mgr.14505) 857 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:50 vm00 bash[20770]: cluster 2026-03-09T17:48:48.956162+0000 mgr.y (mgr.14505) 857 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:50 vm00 bash[20770]: cluster 2026-03-09T17:48:48.956162+0000 mgr.y (mgr.14505) 857 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:52 vm00 bash[28333]: cluster 2026-03-09T17:48:50.956561+0000 mgr.y (mgr.14505) 858 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:52 vm00 bash[28333]: cluster 2026-03-09T17:48:50.956561+0000 mgr.y (mgr.14505) 858 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:52 vm00 bash[20770]: cluster 2026-03-09T17:48:50.956561+0000 mgr.y (mgr.14505) 858 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:52 vm00 bash[20770]: cluster 2026-03-09T17:48:50.956561+0000 mgr.y (mgr.14505) 858 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:52 vm02 bash[23351]: cluster 2026-03-09T17:48:50.956561+0000 mgr.y (mgr.14505) 858 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:52 vm02 bash[23351]: cluster 2026-03-09T17:48:50.956561+0000 mgr.y (mgr.14505) 858 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:52.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:48:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:54 vm00 bash[28333]: audit 2026-03-09T17:48:52.505327+0000 mgr.y (mgr.14505) 859 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:54 vm00 bash[28333]: audit 2026-03-09T17:48:52.505327+0000 mgr.y (mgr.14505) 859 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:54 vm00 bash[28333]: cluster 2026-03-09T17:48:52.957150+0000 mgr.y (mgr.14505) 860 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:54 vm00 bash[28333]: cluster 2026-03-09T17:48:52.957150+0000 mgr.y (mgr.14505) 860 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:54 vm00 bash[20770]: audit 2026-03-09T17:48:52.505327+0000 mgr.y (mgr.14505) 859 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:54 vm00 bash[20770]: audit 2026-03-09T17:48:52.505327+0000 mgr.y (mgr.14505) 859 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:54 vm00 bash[20770]: cluster 2026-03-09T17:48:52.957150+0000 mgr.y (mgr.14505) 860 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:54 vm00 bash[20770]: cluster 2026-03-09T17:48:52.957150+0000 mgr.y (mgr.14505) 860 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:54 vm02 bash[23351]: audit 2026-03-09T17:48:52.505327+0000 mgr.y (mgr.14505) 859 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:54 vm02 bash[23351]: audit 2026-03-09T17:48:52.505327+0000 mgr.y (mgr.14505) 859 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:48:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:54 vm02 bash[23351]: cluster 2026-03-09T17:48:52.957150+0000 mgr.y (mgr.14505) 860 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:54 vm02 bash[23351]: cluster 2026-03-09T17:48:52.957150+0000 mgr.y (mgr.14505) 860 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:48:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:56 vm00 bash[28333]: cluster 2026-03-09T17:48:54.957495+0000 mgr.y (mgr.14505) 861 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:56 vm00 bash[28333]: cluster 2026-03-09T17:48:54.957495+0000 mgr.y (mgr.14505) 861 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:56 vm00 bash[20770]: cluster 2026-03-09T17:48:54.957495+0000 mgr.y (mgr.14505) 861 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:56 vm00 bash[20770]: cluster 2026-03-09T17:48:54.957495+0000 mgr.y (mgr.14505) 861 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:48:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:48:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:48:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:56 vm02 bash[23351]: cluster 2026-03-09T17:48:54.957495+0000 mgr.y (mgr.14505) 861 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:56 vm02 
bash[23351]: cluster 2026-03-09T17:48:54.957495+0000 mgr.y (mgr.14505) 861 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:58 vm00 bash[28333]: cluster 2026-03-09T17:48:56.957763+0000 mgr.y (mgr.14505) 862 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:58 vm00 bash[28333]: cluster 2026-03-09T17:48:56.957763+0000 mgr.y (mgr.14505) 862 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:58 vm00 bash[20770]: cluster 2026-03-09T17:48:56.957763+0000 mgr.y (mgr.14505) 862 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:58 vm00 bash[20770]: cluster 2026-03-09T17:48:56.957763+0000 mgr.y (mgr.14505) 862 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:58 vm02 bash[23351]: cluster 2026-03-09T17:48:56.957763+0000 mgr.y (mgr.14505) 862 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:58 vm02 bash[23351]: cluster 2026-03-09T17:48:56.957763+0000 mgr.y (mgr.14505) 862 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:48:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:59 vm00 bash[28333]: audit 2026-03-09T17:48:58.644741+0000 mon.c (mon.2) 859 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:48:59 vm00 bash[28333]: audit 2026-03-09T17:48:58.644741+0000 mon.c (mon.2) 859 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:59 vm00 bash[20770]: audit 2026-03-09T17:48:58.644741+0000 mon.c (mon.2) 859 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:48:59 vm00 bash[20770]: audit 2026-03-09T17:48:58.644741+0000 mon.c (mon.2) 859 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:59 vm02 bash[23351]: audit 2026-03-09T17:48:58.644741+0000 mon.c (mon.2) 859 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:48:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:48:59 vm02 bash[23351]: audit 2026-03-09T17:48:58.644741+0000 mon.c (mon.2) 859 : 
audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:00 vm00 bash[20770]: cluster 2026-03-09T17:48:58.958460+0000 mgr.y (mgr.14505) 863 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:00 vm00 bash[20770]: cluster 2026-03-09T17:48:58.958460+0000 mgr.y (mgr.14505) 863 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:00 vm00 bash[20770]: audit 2026-03-09T17:49:00.114017+0000 mon.c (mon.2) 860 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:00 vm00 bash[20770]: audit 2026-03-09T17:49:00.114017+0000 mon.c (mon.2) 860 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:00 vm00 bash[28333]: cluster 2026-03-09T17:48:58.958460+0000 mgr.y (mgr.14505) 863 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:00 vm00 bash[28333]: cluster 2026-03-09T17:48:58.958460+0000 mgr.y (mgr.14505) 863 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:00 vm00 bash[28333]: audit 2026-03-09T17:49:00.114017+0000 mon.c (mon.2) 860 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:49:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:00 vm00 bash[28333]: audit 2026-03-09T17:49:00.114017+0000 mon.c (mon.2) 860 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:49:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:00 vm02 bash[23351]: cluster 2026-03-09T17:48:58.958460+0000 mgr.y (mgr.14505) 863 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:00 vm02 bash[23351]: cluster 2026-03-09T17:48:58.958460+0000 mgr.y (mgr.14505) 863 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:00 vm02 bash[23351]: audit 2026-03-09T17:49:00.114017+0000 mon.c (mon.2) 860 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:49:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:00 vm02 bash[23351]: audit 2026-03-09T17:49:00.114017+0000 mon.c (mon.2) 860 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:49:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:02 vm00 bash[28333]: cluster 2026-03-09T17:49:00.958911+0000 mgr.y (mgr.14505) 864 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:02 vm00 bash[28333]: cluster 2026-03-09T17:49:00.958911+0000 mgr.y (mgr.14505) 864 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:02 vm00 bash[20770]: cluster 2026-03-09T17:49:00.958911+0000 mgr.y (mgr.14505) 864 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:02 vm00 bash[20770]: cluster 2026-03-09T17:49:00.958911+0000 mgr.y (mgr.14505) 864 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:02 vm02 bash[23351]: cluster 2026-03-09T17:49:00.958911+0000 mgr.y (mgr.14505) 864 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:02 vm02 bash[23351]: cluster 2026-03-09T17:49:00.958911+0000 mgr.y (mgr.14505) 864 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:02.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:49:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:04 vm00 bash[28333]: audit 2026-03-09T17:49:02.515879+0000 mgr.y (mgr.14505) 865 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:04 vm00 bash[28333]: audit 2026-03-09T17:49:02.515879+0000 mgr.y (mgr.14505) 865 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:04 vm00 bash[28333]: cluster 2026-03-09T17:49:02.959863+0000 mgr.y (mgr.14505) 866 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:04 vm00 bash[28333]: cluster 2026-03-09T17:49:02.959863+0000 mgr.y (mgr.14505) 866 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:04 vm00 bash[20770]: audit 2026-03-09T17:49:02.515879+0000 mgr.y (mgr.14505) 865 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:04 vm00 bash[20770]: audit 2026-03-09T17:49:02.515879+0000 mgr.y (mgr.14505) 865 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:04 vm00 bash[20770]: cluster 2026-03-09T17:49:02.959863+0000 mgr.y (mgr.14505) 866 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:04 vm00 bash[20770]: cluster 2026-03-09T17:49:02.959863+0000 mgr.y (mgr.14505) 866 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:04 vm02 bash[23351]: audit 2026-03-09T17:49:02.515879+0000 mgr.y (mgr.14505) 865 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:04 vm02 bash[23351]: audit 2026-03-09T17:49:02.515879+0000 mgr.y (mgr.14505) 865 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:04 vm02 bash[23351]: cluster 2026-03-09T17:49:02.959863+0000 mgr.y (mgr.14505) 866 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:04 vm02 bash[23351]: cluster 2026-03-09T17:49:02.959863+0000 mgr.y (mgr.14505) 866 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: cluster 2026-03-09T17:49:04.960287+0000 mgr.y (mgr.14505) 867 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: cluster 2026-03-09T17:49:04.960287+0000 mgr.y (mgr.14505) 867 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.323015+0000 mon.a (mon.0) 3527 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.323015+0000 mon.a (mon.0) 3527 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.331216+0000 mon.a (mon.0) 3528 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.331216+0000 mon.a (mon.0) 3528 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.548692+0000 mon.a (mon.0) 3529 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 
2026-03-09T17:49:05.548692+0000 mon.a (mon.0) 3529 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.558007+0000 mon.a (mon.0) 3530 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.558007+0000 mon.a (mon.0) 3530 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.861645+0000 mon.c (mon.2) 861 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.861645+0000 mon.c (mon.2) 861 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.862802+0000 mon.c (mon.2) 862 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.862802+0000 mon.c (mon.2) 862 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.869010+0000 mon.a (mon.0) 3531 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:06 vm02 bash[23351]: audit 2026-03-09T17:49:05.869010+0000 mon.a (mon.0) 3531 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: cluster 2026-03-09T17:49:04.960287+0000 mgr.y (mgr.14505) 867 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: cluster 2026-03-09T17:49:04.960287+0000 mgr.y (mgr.14505) 867 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.323015+0000 mon.a (mon.0) 3527 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.323015+0000 mon.a (mon.0) 3527 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.331216+0000 mon.a (mon.0) 3528 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.331216+0000 mon.a (mon.0) 3528 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 
2026-03-09T17:49:05.548692+0000 mon.a (mon.0) 3529 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.548692+0000 mon.a (mon.0) 3529 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.558007+0000 mon.a (mon.0) 3530 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.558007+0000 mon.a (mon.0) 3530 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.861645+0000 mon.c (mon.2) 861 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.861645+0000 mon.c (mon.2) 861 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:49:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.862802+0000 mon.c (mon.2) 862 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.862802+0000 mon.c (mon.2) 862 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.869010+0000 mon.a (mon.0) 3531 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:06 vm00 bash[28333]: audit 2026-03-09T17:49:05.869010+0000 mon.a (mon.0) 3531 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: cluster 2026-03-09T17:49:04.960287+0000 mgr.y (mgr.14505) 867 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: cluster 2026-03-09T17:49:04.960287+0000 mgr.y (mgr.14505) 867 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.323015+0000 mon.a (mon.0) 3527 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.323015+0000 mon.a (mon.0) 3527 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.331216+0000 mon.a (mon.0) 3528 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 
2026-03-09T17:49:05.331216+0000 mon.a (mon.0) 3528 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.548692+0000 mon.a (mon.0) 3529 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.548692+0000 mon.a (mon.0) 3529 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.558007+0000 mon.a (mon.0) 3530 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.558007+0000 mon.a (mon.0) 3530 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.861645+0000 mon.c (mon.2) 861 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.861645+0000 mon.c (mon.2) 861 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.862802+0000 mon.c (mon.2) 862 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.862802+0000 mon.c (mon.2) 862 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.869010+0000 mon.a (mon.0) 3531 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:06 vm00 bash[20770]: audit 2026-03-09T17:49:05.869010+0000 mon.a (mon.0) 3531 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:49:06.789 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:49:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:49:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:49:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:08 vm02 bash[23351]: cluster 2026-03-09T17:49:06.960613+0000 mgr.y (mgr.14505) 868 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:08 vm02 bash[23351]: cluster 2026-03-09T17:49:06.960613+0000 mgr.y (mgr.14505) 868 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:08 vm00 bash[28333]: cluster 2026-03-09T17:49:06.960613+0000 mgr.y (mgr.14505) 868 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:08.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:08 vm00 bash[28333]: cluster 2026-03-09T17:49:06.960613+0000 mgr.y (mgr.14505) 868 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:08 vm00 bash[20770]: cluster 2026-03-09T17:49:06.960613+0000 mgr.y (mgr.14505) 868 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:08 vm00 bash[20770]: cluster 2026-03-09T17:49:06.960613+0000 mgr.y (mgr.14505) 868 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:10 vm02 bash[23351]: cluster 2026-03-09T17:49:08.961327+0000 mgr.y (mgr.14505) 869 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:10 vm02 bash[23351]: cluster 2026-03-09T17:49:08.961327+0000 mgr.y (mgr.14505) 869 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:10 vm00 bash[28333]: cluster 2026-03-09T17:49:08.961327+0000 mgr.y (mgr.14505) 869 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:10 vm00 bash[28333]: cluster 2026-03-09T17:49:08.961327+0000 mgr.y (mgr.14505) 869 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:10 vm00 bash[20770]: cluster 2026-03-09T17:49:08.961327+0000 mgr.y (mgr.14505) 869 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:10 vm00 bash[20770]: cluster 2026-03-09T17:49:08.961327+0000 mgr.y (mgr.14505) 869 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:12 vm02 bash[23351]: cluster 2026-03-09T17:49:10.961613+0000 mgr.y (mgr.14505) 870 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:12 vm02 bash[23351]: cluster 2026-03-09T17:49:10.961613+0000 mgr.y (mgr.14505) 870 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:12.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:49:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:49:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:12 vm00 bash[28333]: cluster 2026-03-09T17:49:10.961613+0000 mgr.y (mgr.14505) 870 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:12 vm00 bash[28333]: cluster 2026-03-09T17:49:10.961613+0000 mgr.y (mgr.14505) 870 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:12 vm00 bash[20770]: cluster 2026-03-09T17:49:10.961613+0000 mgr.y (mgr.14505) 870 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:12 vm00 bash[20770]: cluster 2026-03-09T17:49:10.961613+0000 mgr.y (mgr.14505) 870 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:14 vm00 bash[28333]: audit 2026-03-09T17:49:12.524176+0000 mgr.y (mgr.14505) 871 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:14 vm00 bash[28333]: audit 2026-03-09T17:49:12.524176+0000 mgr.y (mgr.14505) 871 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:14 vm00 bash[28333]: cluster 2026-03-09T17:49:12.962259+0000 mgr.y (mgr.14505) 872 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:14 vm00 bash[28333]: cluster 2026-03-09T17:49:12.962259+0000 mgr.y (mgr.14505) 872 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:14 vm00 bash[28333]: audit 2026-03-09T17:49:13.650916+0000 mon.c (mon.2) 863 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:14 vm00 bash[28333]: audit 2026-03-09T17:49:13.650916+0000 mon.c (mon.2) 863 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:14 vm00 bash[20770]: audit 2026-03-09T17:49:12.524176+0000 mgr.y (mgr.14505) 871 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:14 vm00 bash[20770]: audit 2026-03-09T17:49:12.524176+0000 mgr.y (mgr.14505) 871 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:14 vm00 bash[20770]: cluster 2026-03-09T17:49:12.962259+0000 mgr.y (mgr.14505) 872 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:14.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:14 vm00 bash[20770]: cluster 2026-03-09T17:49:12.962259+0000 mgr.y (mgr.14505) 872 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:14 vm00 bash[20770]: audit 2026-03-09T17:49:13.650916+0000 mon.c (mon.2) 863 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:14 vm00 bash[20770]: audit 2026-03-09T17:49:13.650916+0000 mon.c (mon.2) 863 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:14 vm02 bash[23351]: audit 2026-03-09T17:49:12.524176+0000 mgr.y (mgr.14505) 871 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:14 vm02 bash[23351]: audit 2026-03-09T17:49:12.524176+0000 mgr.y (mgr.14505) 871 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:14 vm02 bash[23351]: cluster 2026-03-09T17:49:12.962259+0000 mgr.y (mgr.14505) 872 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:14 vm02 bash[23351]: cluster 2026-03-09T17:49:12.962259+0000 mgr.y (mgr.14505) 872 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:14 vm02 bash[23351]: audit 2026-03-09T17:49:13.650916+0000 mon.c (mon.2) 863 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:14 vm02 bash[23351]: audit 2026-03-09T17:49:13.650916+0000 mon.c (mon.2) 863 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:49:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:49:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:49:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:16 vm00 bash[28333]: cluster 2026-03-09T17:49:14.962673+0000 mgr.y (mgr.14505) 873 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:16 vm00 bash[28333]: cluster 2026-03-09T17:49:14.962673+0000 mgr.y (mgr.14505) 873 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:16 vm00 bash[20770]: cluster 2026-03-09T17:49:14.962673+0000 mgr.y (mgr.14505) 873 : cluster [DBG] pgmap 
v1345: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:16 vm00 bash[20770]: cluster 2026-03-09T17:49:14.962673+0000 mgr.y (mgr.14505) 873 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:16 vm02 bash[23351]: cluster 2026-03-09T17:49:14.962673+0000 mgr.y (mgr.14505) 873 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:16 vm02 bash[23351]: cluster 2026-03-09T17:49:14.962673+0000 mgr.y (mgr.14505) 873 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:18 vm00 bash[28333]: cluster 2026-03-09T17:49:16.963021+0000 mgr.y (mgr.14505) 874 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:18 vm00 bash[28333]: cluster 2026-03-09T17:49:16.963021+0000 mgr.y (mgr.14505) 874 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:18 vm00 bash[20770]: cluster 2026-03-09T17:49:16.963021+0000 mgr.y (mgr.14505) 874 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:18 vm00 bash[20770]: cluster 2026-03-09T17:49:16.963021+0000 mgr.y (mgr.14505) 874 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:18 vm02 bash[23351]: cluster 2026-03-09T17:49:16.963021+0000 mgr.y (mgr.14505) 874 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:18 vm02 bash[23351]: cluster 2026-03-09T17:49:16.963021+0000 mgr.y (mgr.14505) 874 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:20 vm00 bash[20770]: cluster 2026-03-09T17:49:18.963739+0000 mgr.y (mgr.14505) 875 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:20 vm00 bash[20770]: cluster 2026-03-09T17:49:18.963739+0000 mgr.y (mgr.14505) 875 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:20 vm00 bash[28333]: cluster 2026-03-09T17:49:18.963739+0000 mgr.y (mgr.14505) 875 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T17:49:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:20 vm00 bash[28333]: cluster 2026-03-09T17:49:18.963739+0000 mgr.y (mgr.14505) 875 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:20 vm02 bash[23351]: cluster 2026-03-09T17:49:18.963739+0000 mgr.y (mgr.14505) 875 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:20 vm02 bash[23351]: cluster 2026-03-09T17:49:18.963739+0000 mgr.y (mgr.14505) 875 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:22 vm00 bash[28333]: cluster 2026-03-09T17:49:20.964042+0000 mgr.y (mgr.14505) 876 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:22 vm00 bash[28333]: cluster 2026-03-09T17:49:20.964042+0000 mgr.y (mgr.14505) 876 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:22 vm00 bash[20770]: cluster 2026-03-09T17:49:20.964042+0000 mgr.y (mgr.14505) 876 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:22 vm00 bash[20770]: cluster 2026-03-09T17:49:20.964042+0000 mgr.y (mgr.14505) 876 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:22.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:22 vm02 bash[23351]: cluster 2026-03-09T17:49:20.964042+0000 mgr.y (mgr.14505) 876 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:22.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:22 vm02 bash[23351]: cluster 2026-03-09T17:49:20.964042+0000 mgr.y (mgr.14505) 876 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:22.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:49:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:24 vm00 bash[28333]: audit 2026-03-09T17:49:22.534928+0000 mgr.y (mgr.14505) 877 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:24 vm00 bash[28333]: audit 2026-03-09T17:49:22.534928+0000 mgr.y (mgr.14505) 877 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:24 vm00 bash[28333]: cluster 2026-03-09T17:49:22.964580+0000 mgr.y (mgr.14505) 878 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 
455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:24 vm00 bash[28333]: cluster 2026-03-09T17:49:22.964580+0000 mgr.y (mgr.14505) 878 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:24 vm00 bash[20770]: audit 2026-03-09T17:49:22.534928+0000 mgr.y (mgr.14505) 877 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:24 vm00 bash[20770]: audit 2026-03-09T17:49:22.534928+0000 mgr.y (mgr.14505) 877 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:24 vm00 bash[20770]: cluster 2026-03-09T17:49:22.964580+0000 mgr.y (mgr.14505) 878 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:24 vm00 bash[20770]: cluster 2026-03-09T17:49:22.964580+0000 mgr.y (mgr.14505) 878 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:24 vm02 bash[23351]: audit 2026-03-09T17:49:22.534928+0000 mgr.y (mgr.14505) 877 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:24 vm02 bash[23351]: audit 2026-03-09T17:49:22.534928+0000 mgr.y (mgr.14505) 877 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:24 vm02 bash[23351]: cluster 2026-03-09T17:49:22.964580+0000 mgr.y (mgr.14505) 878 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:24 vm02 bash[23351]: cluster 2026-03-09T17:49:22.964580+0000 mgr.y (mgr.14505) 878 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:26 vm00 bash[28333]: cluster 2026-03-09T17:49:24.964959+0000 mgr.y (mgr.14505) 879 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:26 vm00 bash[28333]: cluster 2026-03-09T17:49:24.964959+0000 mgr.y (mgr.14505) 879 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:26 vm00 bash[20770]: cluster 2026-03-09T17:49:24.964959+0000 mgr.y (mgr.14505) 879 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T17:49:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:26 vm00 bash[20770]: cluster 2026-03-09T17:49:24.964959+0000 mgr.y (mgr.14505) 879 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:49:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:49:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:49:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:26 vm02 bash[23351]: cluster 2026-03-09T17:49:24.964959+0000 mgr.y (mgr.14505) 879 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:26 vm02 bash[23351]: cluster 2026-03-09T17:49:24.964959+0000 mgr.y (mgr.14505) 879 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:28 vm00 bash[28333]: cluster 2026-03-09T17:49:26.965230+0000 mgr.y (mgr.14505) 880 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:28 vm00 bash[28333]: cluster 2026-03-09T17:49:26.965230+0000 mgr.y (mgr.14505) 880 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:28 vm00 bash[20770]: cluster 2026-03-09T17:49:26.965230+0000 mgr.y (mgr.14505) 880 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:28 vm00 bash[20770]: cluster 2026-03-09T17:49:26.965230+0000 mgr.y (mgr.14505) 880 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:28.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:28 vm02 bash[23351]: cluster 2026-03-09T17:49:26.965230+0000 mgr.y (mgr.14505) 880 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:28.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:28 vm02 bash[23351]: cluster 2026-03-09T17:49:26.965230+0000 mgr.y (mgr.14505) 880 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:29 vm00 bash[28333]: audit 2026-03-09T17:49:28.660488+0000 mon.c (mon.2) 864 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:29 vm00 bash[28333]: audit 2026-03-09T17:49:28.660488+0000 mon.c (mon.2) 864 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:29 vm00 bash[20770]: audit 2026-03-09T17:49:28.660488+0000 mon.c (mon.2) 864 : audit [DBG] 
from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:29 vm00 bash[20770]: audit 2026-03-09T17:49:28.660488+0000 mon.c (mon.2) 864 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:29.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:29 vm02 bash[23351]: audit 2026-03-09T17:49:28.660488+0000 mon.c (mon.2) 864 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:29.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:29 vm02 bash[23351]: audit 2026-03-09T17:49:28.660488+0000 mon.c (mon.2) 864 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:30 vm00 bash[28333]: cluster 2026-03-09T17:49:28.965904+0000 mgr.y (mgr.14505) 881 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:30 vm00 bash[28333]: cluster 2026-03-09T17:49:28.965904+0000 mgr.y (mgr.14505) 881 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:30 vm00 bash[20770]: cluster 2026-03-09T17:49:28.965904+0000 mgr.y (mgr.14505) 881 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:30 vm00 bash[20770]: cluster 2026-03-09T17:49:28.965904+0000 mgr.y (mgr.14505) 881 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:30.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:30 vm02 bash[23351]: cluster 2026-03-09T17:49:28.965904+0000 mgr.y (mgr.14505) 881 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:30.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:30 vm02 bash[23351]: cluster 2026-03-09T17:49:28.965904+0000 mgr.y (mgr.14505) 881 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:32 vm00 bash[28333]: cluster 2026-03-09T17:49:30.966250+0000 mgr.y (mgr.14505) 882 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:32 vm00 bash[28333]: cluster 2026-03-09T17:49:30.966250+0000 mgr.y (mgr.14505) 882 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:32 vm00 bash[20770]: cluster 2026-03-09T17:49:30.966250+0000 mgr.y (mgr.14505) 882 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1004 
MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:32 vm00 bash[20770]: cluster 2026-03-09T17:49:30.966250+0000 mgr.y (mgr.14505) 882 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:32.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:32 vm02 bash[23351]: cluster 2026-03-09T17:49:30.966250+0000 mgr.y (mgr.14505) 882 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:32.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:32 vm02 bash[23351]: cluster 2026-03-09T17:49:30.966250+0000 mgr.y (mgr.14505) 882 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:32.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:49:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:34 vm00 bash[28333]: audit 2026-03-09T17:49:32.544643+0000 mgr.y (mgr.14505) 883 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:34 vm00 bash[28333]: audit 2026-03-09T17:49:32.544643+0000 mgr.y (mgr.14505) 883 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:34 vm00 bash[28333]: cluster 2026-03-09T17:49:32.967089+0000 mgr.y (mgr.14505) 884 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:34 vm00 bash[28333]: cluster 2026-03-09T17:49:32.967089+0000 mgr.y (mgr.14505) 884 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:34 vm00 bash[20770]: audit 2026-03-09T17:49:32.544643+0000 mgr.y (mgr.14505) 883 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:34 vm00 bash[20770]: audit 2026-03-09T17:49:32.544643+0000 mgr.y (mgr.14505) 883 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:34 vm00 bash[20770]: cluster 2026-03-09T17:49:32.967089+0000 mgr.y (mgr.14505) 884 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:34 vm00 bash[20770]: cluster 2026-03-09T17:49:32.967089+0000 mgr.y (mgr.14505) 884 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:34 vm02 bash[23351]: audit 2026-03-09T17:49:32.544643+0000 mgr.y (mgr.14505) 883 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:34 vm02 bash[23351]: audit 2026-03-09T17:49:32.544643+0000 mgr.y (mgr.14505) 883 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:34 vm02 bash[23351]: cluster 2026-03-09T17:49:32.967089+0000 mgr.y (mgr.14505) 884 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:34 vm02 bash[23351]: cluster 2026-03-09T17:49:32.967089+0000 mgr.y (mgr.14505) 884 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:35 vm00 bash[28333]: cluster 2026-03-09T17:49:34.967483+0000 mgr.y (mgr.14505) 885 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:35 vm00 bash[28333]: cluster 2026-03-09T17:49:34.967483+0000 mgr.y (mgr.14505) 885 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:35 vm00 bash[20770]: cluster 2026-03-09T17:49:34.967483+0000 mgr.y (mgr.14505) 885 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:35 vm00 bash[20770]: cluster 2026-03-09T17:49:34.967483+0000 mgr.y (mgr.14505) 885 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:35.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:35 vm02 bash[23351]: cluster 2026-03-09T17:49:34.967483+0000 mgr.y (mgr.14505) 885 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:35.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:35 vm02 bash[23351]: cluster 2026-03-09T17:49:34.967483+0000 mgr.y (mgr.14505) 885 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:49:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:49:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:49:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:38 vm00 bash[28333]: cluster 2026-03-09T17:49:36.967736+0000 mgr.y (mgr.14505) 886 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:38.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:38 vm00 bash[28333]: cluster 2026-03-09T17:49:36.967736+0000 mgr.y (mgr.14505) 886 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 17:49:38 vm00 bash[20770]: cluster 2026-03-09T17:49:36.967736+0000 mgr.y (mgr.14505) 886 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:38.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:38 vm00 bash[20770]: cluster 2026-03-09T17:49:36.967736+0000 mgr.y (mgr.14505) 886 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:38 vm02 bash[23351]: cluster 2026-03-09T17:49:36.967736+0000 mgr.y (mgr.14505) 886 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:38 vm02 bash[23351]: cluster 2026-03-09T17:49:36.967736+0000 mgr.y (mgr.14505) 886 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:40 vm00 bash[28333]: cluster 2026-03-09T17:49:38.968341+0000 mgr.y (mgr.14505) 887 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:40.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:40 vm00 bash[28333]: cluster 2026-03-09T17:49:38.968341+0000 mgr.y (mgr.14505) 887 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:40 vm00 bash[20770]: cluster 2026-03-09T17:49:38.968341+0000 mgr.y (mgr.14505) 887 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:40.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:40 vm00 bash[20770]: cluster 2026-03-09T17:49:38.968341+0000 mgr.y (mgr.14505) 887 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:40 vm02 bash[23351]: cluster 2026-03-09T17:49:38.968341+0000 mgr.y (mgr.14505) 887 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:40.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:40 vm02 bash[23351]: cluster 2026-03-09T17:49:38.968341+0000 mgr.y (mgr.14505) 887 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:42 vm00 bash[28333]: cluster 2026-03-09T17:49:40.968609+0000 mgr.y (mgr.14505) 888 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:42 vm00 bash[28333]: cluster 2026-03-09T17:49:40.968609+0000 mgr.y (mgr.14505) 888 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:42 vm00 bash[20770]: cluster 2026-03-09T17:49:40.968609+0000 mgr.y 
(mgr.14505) 888 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:42 vm00 bash[20770]: cluster 2026-03-09T17:49:40.968609+0000 mgr.y (mgr.14505) 888 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:42.555 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:42 vm02 bash[23351]: cluster 2026-03-09T17:49:40.968609+0000 mgr.y (mgr.14505) 888 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:42.555 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:42 vm02 bash[23351]: cluster 2026-03-09T17:49:40.968609+0000 mgr.y (mgr.14505) 888 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:42.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:49:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:44 vm00 bash[28333]: audit 2026-03-09T17:49:42.555036+0000 mgr.y (mgr.14505) 889 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:44 vm00 bash[28333]: audit 2026-03-09T17:49:42.555036+0000 mgr.y (mgr.14505) 889 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:44 vm00 bash[28333]: cluster 2026-03-09T17:49:42.969086+0000 mgr.y (mgr.14505) 890 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:44 vm00 bash[28333]: cluster 2026-03-09T17:49:42.969086+0000 mgr.y (mgr.14505) 890 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:44 vm00 bash[28333]: audit 2026-03-09T17:49:43.666426+0000 mon.c (mon.2) 865 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:44 vm00 bash[28333]: audit 2026-03-09T17:49:43.666426+0000 mon.c (mon.2) 865 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:44 vm00 bash[20770]: audit 2026-03-09T17:49:42.555036+0000 mgr.y (mgr.14505) 889 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:44 vm00 bash[20770]: audit 2026-03-09T17:49:42.555036+0000 mgr.y (mgr.14505) 889 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:44 vm00 
bash[20770]: cluster 2026-03-09T17:49:42.969086+0000 mgr.y (mgr.14505) 890 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:44 vm00 bash[20770]: cluster 2026-03-09T17:49:42.969086+0000 mgr.y (mgr.14505) 890 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:44 vm00 bash[20770]: audit 2026-03-09T17:49:43.666426+0000 mon.c (mon.2) 865 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:44 vm00 bash[20770]: audit 2026-03-09T17:49:43.666426+0000 mon.c (mon.2) 865 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:44 vm02 bash[23351]: audit 2026-03-09T17:49:42.555036+0000 mgr.y (mgr.14505) 889 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:44 vm02 bash[23351]: audit 2026-03-09T17:49:42.555036+0000 mgr.y (mgr.14505) 889 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:44 vm02 bash[23351]: cluster 2026-03-09T17:49:42.969086+0000 mgr.y (mgr.14505) 890 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:44 vm02 bash[23351]: cluster 2026-03-09T17:49:42.969086+0000 mgr.y (mgr.14505) 890 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:44 vm02 bash[23351]: audit 2026-03-09T17:49:43.666426+0000 mon.c (mon.2) 865 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:44 vm02 bash[23351]: audit 2026-03-09T17:49:43.666426+0000 mon.c (mon.2) 865 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:46 vm00 bash[28333]: cluster 2026-03-09T17:49:44.969449+0000 mgr.y (mgr.14505) 891 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:46 vm00 bash[28333]: cluster 2026-03-09T17:49:44.969449+0000 mgr.y (mgr.14505) 891 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:49:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:49:46] "GET 
/metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:49:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:46 vm00 bash[20770]: cluster 2026-03-09T17:49:44.969449+0000 mgr.y (mgr.14505) 891 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:46 vm00 bash[20770]: cluster 2026-03-09T17:49:44.969449+0000 mgr.y (mgr.14505) 891 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:46 vm02 bash[23351]: cluster 2026-03-09T17:49:44.969449+0000 mgr.y (mgr.14505) 891 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:46 vm02 bash[23351]: cluster 2026-03-09T17:49:44.969449+0000 mgr.y (mgr.14505) 891 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:48 vm00 bash[20770]: cluster 2026-03-09T17:49:46.969711+0000 mgr.y (mgr.14505) 892 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:48 vm00 bash[20770]: cluster 2026-03-09T17:49:46.969711+0000 mgr.y (mgr.14505) 892 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:48 vm00 bash[28333]: cluster 2026-03-09T17:49:46.969711+0000 mgr.y (mgr.14505) 892 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:48 vm00 bash[28333]: cluster 2026-03-09T17:49:46.969711+0000 mgr.y (mgr.14505) 892 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:48 vm02 bash[23351]: cluster 2026-03-09T17:49:46.969711+0000 mgr.y (mgr.14505) 892 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:48 vm02 bash[23351]: cluster 2026-03-09T17:49:46.969711+0000 mgr.y (mgr.14505) 892 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:50 vm00 bash[28333]: cluster 2026-03-09T17:49:48.970318+0000 mgr.y (mgr.14505) 893 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:50 vm00 bash[28333]: cluster 2026-03-09T17:49:48.970318+0000 mgr.y (mgr.14505) 893 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:50.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:50 vm00 bash[20770]: cluster 2026-03-09T17:49:48.970318+0000 mgr.y (mgr.14505) 893 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:50 vm00 bash[20770]: cluster 2026-03-09T17:49:48.970318+0000 mgr.y (mgr.14505) 893 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:50 vm02 bash[23351]: cluster 2026-03-09T17:49:48.970318+0000 mgr.y (mgr.14505) 893 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:50 vm02 bash[23351]: cluster 2026-03-09T17:49:48.970318+0000 mgr.y (mgr.14505) 893 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:52 vm00 bash[20770]: cluster 2026-03-09T17:49:50.970667+0000 mgr.y (mgr.14505) 894 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:52 vm00 bash[20770]: cluster 2026-03-09T17:49:50.970667+0000 mgr.y (mgr.14505) 894 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:52 vm00 bash[28333]: cluster 2026-03-09T17:49:50.970667+0000 mgr.y (mgr.14505) 894 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:52 vm00 bash[28333]: cluster 2026-03-09T17:49:50.970667+0000 mgr.y (mgr.14505) 894 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:52.561 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:52 vm02 bash[23351]: cluster 2026-03-09T17:49:50.970667+0000 mgr.y (mgr.14505) 894 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:52.562 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:52 vm02 bash[23351]: cluster 2026-03-09T17:49:50.970667+0000 mgr.y (mgr.14505) 894 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:52.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:49:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:54 vm00 bash[28333]: audit 2026-03-09T17:49:52.561259+0000 mgr.y (mgr.14505) 895 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:54 vm00 bash[28333]: audit 2026-03-09T17:49:52.561259+0000 mgr.y (mgr.14505) 895 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:54 vm00 bash[28333]: cluster 2026-03-09T17:49:52.971149+0000 mgr.y (mgr.14505) 896 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:54 vm00 bash[28333]: cluster 2026-03-09T17:49:52.971149+0000 mgr.y (mgr.14505) 896 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:54 vm00 bash[20770]: audit 2026-03-09T17:49:52.561259+0000 mgr.y (mgr.14505) 895 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:54 vm00 bash[20770]: audit 2026-03-09T17:49:52.561259+0000 mgr.y (mgr.14505) 895 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:54 vm00 bash[20770]: cluster 2026-03-09T17:49:52.971149+0000 mgr.y (mgr.14505) 896 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:54 vm00 bash[20770]: cluster 2026-03-09T17:49:52.971149+0000 mgr.y (mgr.14505) 896 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:54 vm02 bash[23351]: audit 2026-03-09T17:49:52.561259+0000 mgr.y (mgr.14505) 895 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:54 vm02 bash[23351]: audit 2026-03-09T17:49:52.561259+0000 mgr.y (mgr.14505) 895 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:49:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:54 vm02 bash[23351]: cluster 2026-03-09T17:49:52.971149+0000 mgr.y (mgr.14505) 896 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:54 vm02 bash[23351]: cluster 2026-03-09T17:49:52.971149+0000 mgr.y (mgr.14505) 896 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:49:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:56 vm00 bash[28333]: cluster 2026-03-09T17:49:54.971479+0000 mgr.y (mgr.14505) 897 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:56 vm00 bash[28333]: cluster 2026-03-09T17:49:54.971479+0000 mgr.y (mgr.14505) 897 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:56.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:56 vm00 bash[20770]: cluster 2026-03-09T17:49:54.971479+0000 mgr.y (mgr.14505) 897 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:56 vm00 bash[20770]: cluster 2026-03-09T17:49:54.971479+0000 mgr.y (mgr.14505) 897 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:49:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:49:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:49:56.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:56 vm02 bash[23351]: cluster 2026-03-09T17:49:54.971479+0000 mgr.y (mgr.14505) 897 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:56.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:56 vm02 bash[23351]: cluster 2026-03-09T17:49:54.971479+0000 mgr.y (mgr.14505) 897 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:58 vm00 bash[28333]: cluster 2026-03-09T17:49:56.971802+0000 mgr.y (mgr.14505) 898 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:58 vm00 bash[28333]: cluster 2026-03-09T17:49:56.971802+0000 mgr.y (mgr.14505) 898 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:58 vm00 bash[20770]: cluster 2026-03-09T17:49:56.971802+0000 mgr.y (mgr.14505) 898 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:58 vm00 bash[20770]: cluster 2026-03-09T17:49:56.971802+0000 mgr.y (mgr.14505) 898 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:58.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:58 vm02 bash[23351]: cluster 2026-03-09T17:49:56.971802+0000 mgr.y (mgr.14505) 898 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:58.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:58 vm02 bash[23351]: cluster 2026-03-09T17:49:56.971802+0000 mgr.y (mgr.14505) 898 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:49:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:59 vm00 bash[28333]: audit 2026-03-09T17:49:58.672742+0000 mon.c (mon.2) 866 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:49:59 vm00 bash[28333]: audit 2026-03-09T17:49:58.672742+0000 mon.c (mon.2) 866 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:59 vm00 bash[20770]: audit 2026-03-09T17:49:58.672742+0000 mon.c (mon.2) 866 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:49:59 vm00 bash[20770]: audit 2026-03-09T17:49:58.672742+0000 mon.c (mon.2) 866 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:59 vm02 bash[23351]: audit 2026-03-09T17:49:58.672742+0000 mon.c (mon.2) 866 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:49:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:49:59 vm02 bash[23351]: audit 2026-03-09T17:49:58.672742+0000 mon.c (mon.2) 866 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:49:58.972434+0000 mgr.y (mgr.14505) 899 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:49:58.972434+0000 mgr.y (mgr.14505) 899 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000104+0000 mon.a (mon.0) 3532 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000104+0000 mon.a (mon.0) 3532 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000146+0000 mon.a (mon.0) 3533 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000146+0000 mon.a (mon.0) 3533 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000158+0000 mon.a (mon.0) 3534 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000158+0000 mon.a (mon.0) 3534 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000169+0000 mon.a (mon.0) 3535 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:50:00.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000169+0000 mon.a (mon.0) 3535 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000181+0000 mon.a (mon.0) 3536 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000181+0000 mon.a (mon.0) 3536 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000192+0000 mon.a (mon.0) 3537 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:00 vm00 bash[28333]: cluster 2026-03-09T17:50:00.000192+0000 mon.a (mon.0) 3537 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:49:58.972434+0000 mgr.y (mgr.14505) 899 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:49:58.972434+0000 mgr.y (mgr.14505) 899 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000104+0000 mon.a (mon.0) 3532 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000104+0000 mon.a (mon.0) 3532 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000146+0000 mon.a (mon.0) 3533 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000146+0000 mon.a (mon.0) 3533 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000158+0000 mon.a (mon.0) 3534 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000158+0000 mon.a (mon.0) 3534 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000169+0000 mon.a (mon.0) 3535 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09
17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000169+0000 mon.a (mon.0) 3535 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000181+0000 mon.a (mon.0) 3536 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000181+0000 mon.a (mon.0) 3536 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000192+0000 mon.a (mon.0) 3537 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:50:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:00 vm00 bash[20770]: cluster 2026-03-09T17:50:00.000192+0000 mon.a (mon.0) 3537 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:50:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:49:58.972434+0000 mgr.y (mgr.14505) 899 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:49:58.972434+0000 mgr.y (mgr.14505) 899 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000104+0000 mon.a (mon.0) 3532 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000104+0000 mon.a (mon.0) 3532 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000146+0000 mon.a (mon.0) 3533 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000146+0000 mon.a (mon.0) 3533 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000158+0000 mon.a (mon.0) 3534 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000158+0000 mon.a (mon.0) 3534 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000169+0000 mon.a (mon.0) 3535 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster
2026-03-09T17:50:00.000169+0000 mon.a (mon.0) 3535 : cluster [WRN] application not enabled on pool 'SnapListvm00-60924-1' 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000181+0000 mon.a (mon.0) 3536 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000181+0000 mon.a (mon.0) 3536 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60946-1' 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000192+0000 mon.a (mon.0) 3537 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:50:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:00 vm02 bash[23351]: cluster 2026-03-09T17:50:00.000192+0000 mon.a (mon.0) 3537 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T17:50:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:02 vm00 bash[28333]: cluster 2026-03-09T17:50:00.972735+0000 mgr.y (mgr.14505) 900 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:02 vm00 bash[28333]: cluster 2026-03-09T17:50:00.972735+0000 mgr.y (mgr.14505) 900 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:02 vm00 bash[20770]: cluster 2026-03-09T17:50:00.972735+0000 mgr.y (mgr.14505) 900 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:02 vm00 bash[20770]: cluster 2026-03-09T17:50:00.972735+0000 mgr.y (mgr.14505) 900 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:02.569 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:02 vm02 bash[23351]: cluster 2026-03-09T17:50:00.972735+0000 mgr.y (mgr.14505) 900 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:02.569 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:02 vm02 bash[23351]: cluster 2026-03-09T17:50:00.972735+0000 mgr.y (mgr.14505) 900 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:02.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:50:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:04 vm00 bash[28333]: audit 2026-03-09T17:50:02.568822+0000 mgr.y (mgr.14505) 901 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:04 vm00 bash[28333]: audit 2026-03-09T17:50:02.568822+0000 mgr.y (mgr.14505) 901 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:04 vm00 bash[28333]: cluster 2026-03-09T17:50:02.973348+0000 mgr.y (mgr.14505) 902 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:04 vm00 bash[28333]: cluster 2026-03-09T17:50:02.973348+0000 mgr.y (mgr.14505) 902 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:04 vm00 bash[20770]: audit 2026-03-09T17:50:02.568822+0000 mgr.y (mgr.14505) 901 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:04 vm00 bash[20770]: audit 2026-03-09T17:50:02.568822+0000 mgr.y (mgr.14505) 901 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:04 vm00 bash[20770]: cluster 2026-03-09T17:50:02.973348+0000 mgr.y (mgr.14505) 902 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:04 vm00 bash[20770]: cluster 2026-03-09T17:50:02.973348+0000 mgr.y (mgr.14505) 902 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:04 vm02 bash[23351]: audit 2026-03-09T17:50:02.568822+0000 mgr.y (mgr.14505) 901 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:04 vm02 bash[23351]: audit 2026-03-09T17:50:02.568822+0000 mgr.y (mgr.14505) 901 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:04 vm02 bash[23351]: cluster 2026-03-09T17:50:02.973348+0000 mgr.y (mgr.14505) 902 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:04 vm02 bash[23351]: cluster 2026-03-09T17:50:02.973348+0000 mgr.y (mgr.14505) 902 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: cluster 2026-03-09T17:50:04.973771+0000 mgr.y (mgr.14505) 903 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: cluster 2026-03-09T17:50:04.973771+0000 mgr.y (mgr.14505) 903 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:06.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:05.911409+0000 mon.c (mon.2) 867 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:05.911409+0000 mon.c (mon.2) 867 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:06.237298+0000 mon.c (mon.2) 868 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:06.237298+0000 mon.c (mon.2) 868 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:06.237983+0000 mon.c (mon.2) 869 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:06.237983+0000 mon.c (mon.2) 869 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:06.247324+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:06 vm00 bash[28333]: audit 2026-03-09T17:50:06.247324+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: cluster 2026-03-09T17:50:04.973771+0000 mgr.y (mgr.14505) 903 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: cluster 2026-03-09T17:50:04.973771+0000 mgr.y (mgr.14505) 903 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:05.911409+0000 mon.c (mon.2) 867 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:05.911409+0000 mon.c (mon.2) 867 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:06.237298+0000 mon.c (mon.2) 868 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:06.237298+0000 mon.c (mon.2) 868 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:50:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:06.237983+0000 mon.c (mon.2) 869 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:50:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:06.237983+0000 mon.c (mon.2) 869 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:50:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:06.247324+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:50:06.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:06 vm00 bash[20770]: audit 2026-03-09T17:50:06.247324+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:50:06.539 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:50:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:50:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:50:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: cluster 2026-03-09T17:50:04.973771+0000 mgr.y (mgr.14505) 903 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: cluster 2026-03-09T17:50:04.973771+0000 mgr.y (mgr.14505) 903 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:05.911409+0000 mon.c (mon.2) 867 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:05.911409+0000 mon.c (mon.2) 867 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:06.237298+0000 mon.c (mon.2) 868 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:06.237298+0000 mon.c (mon.2) 868 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:06.237983+0000 mon.c (mon.2) 869 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:50:06.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:06.237983+0000 mon.c (mon.2) 869 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:06.247324+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:50:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:06 vm02 bash[23351]: audit 2026-03-09T17:50:06.247324+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:50:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:08 vm00 bash[28333]: cluster 2026-03-09T17:50:06.974159+0000 mgr.y (mgr.14505) 904 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:08 vm00 bash[28333]: cluster 2026-03-09T17:50:06.974159+0000 mgr.y (mgr.14505) 904 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:08 vm00 bash[20770]: cluster 2026-03-09T17:50:06.974159+0000 mgr.y (mgr.14505) 904 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:08 vm00 bash[20770]: cluster 2026-03-09T17:50:06.974159+0000 mgr.y (mgr.14505) 904 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:08 vm02 bash[23351]: cluster 2026-03-09T17:50:06.974159+0000 mgr.y (mgr.14505) 904 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:08 vm02 bash[23351]: cluster 2026-03-09T17:50:06.974159+0000 mgr.y (mgr.14505) 904 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:10 vm00 bash[28333]: cluster 2026-03-09T17:50:08.974894+0000 mgr.y (mgr.14505) 905 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:10 vm00 bash[28333]: cluster 2026-03-09T17:50:08.974894+0000 mgr.y (mgr.14505) 905 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:10 vm00 bash[20770]: cluster 2026-03-09T17:50:08.974894+0000 mgr.y (mgr.14505) 905 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:10 vm00 bash[20770]: cluster 2026-03-09T17:50:08.974894+0000 mgr.y (mgr.14505) 905 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T17:50:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:10 vm02 bash[23351]: cluster 2026-03-09T17:50:08.974894+0000 mgr.y (mgr.14505) 905 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:10 vm02 bash[23351]: cluster 2026-03-09T17:50:08.974894+0000 mgr.y (mgr.14505) 905 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:12 vm00 bash[28333]: cluster 2026-03-09T17:50:10.975146+0000 mgr.y (mgr.14505) 906 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:12 vm00 bash[28333]: cluster 2026-03-09T17:50:10.975146+0000 mgr.y (mgr.14505) 906 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:12 vm00 bash[20770]: cluster 2026-03-09T17:50:10.975146+0000 mgr.y (mgr.14505) 906 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:12 vm00 bash[20770]: cluster 2026-03-09T17:50:10.975146+0000 mgr.y (mgr.14505) 906 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:12.575 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:12 vm02 bash[23351]: cluster 2026-03-09T17:50:10.975146+0000 mgr.y (mgr.14505) 906 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:12.575 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:12 vm02 bash[23351]: cluster 2026-03-09T17:50:10.975146+0000 mgr.y (mgr.14505) 906 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:12.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:50:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:14 vm00 bash[20770]: audit 2026-03-09T17:50:12.574970+0000 mgr.y (mgr.14505) 907 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:14 vm00 bash[20770]: audit 2026-03-09T17:50:12.574970+0000 mgr.y (mgr.14505) 907 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:14 vm00 bash[20770]: cluster 2026-03-09T17:50:12.975817+0000 mgr.y (mgr.14505) 908 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:14 vm00 bash[20770]: cluster 2026-03-09T17:50:12.975817+0000 mgr.y (mgr.14505) 908 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 
1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:14 vm00 bash[20770]: audit 2026-03-09T17:50:13.679630+0000 mon.c (mon.2) 870 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:14 vm00 bash[20770]: audit 2026-03-09T17:50:13.679630+0000 mon.c (mon.2) 870 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:14 vm00 bash[28333]: audit 2026-03-09T17:50:12.574970+0000 mgr.y (mgr.14505) 907 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:14 vm00 bash[28333]: audit 2026-03-09T17:50:12.574970+0000 mgr.y (mgr.14505) 907 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:14 vm00 bash[28333]: cluster 2026-03-09T17:50:12.975817+0000 mgr.y (mgr.14505) 908 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:14 vm00 bash[28333]: cluster 2026-03-09T17:50:12.975817+0000 mgr.y (mgr.14505) 908 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:14 vm00 bash[28333]: audit 2026-03-09T17:50:13.679630+0000 mon.c (mon.2) 870 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:14 vm00 bash[28333]: audit 2026-03-09T17:50:13.679630+0000 mon.c (mon.2) 870 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:14 vm02 bash[23351]: audit 2026-03-09T17:50:12.574970+0000 mgr.y (mgr.14505) 907 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:14 vm02 bash[23351]: audit 2026-03-09T17:50:12.574970+0000 mgr.y (mgr.14505) 907 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:14 vm02 bash[23351]: cluster 2026-03-09T17:50:12.975817+0000 mgr.y (mgr.14505) 908 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:14 vm02 bash[23351]: cluster 2026-03-09T17:50:12.975817+0000 mgr.y (mgr.14505) 908 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T17:50:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:14 vm02 bash[23351]: audit 2026-03-09T17:50:13.679630+0000 mon.c (mon.2) 870 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:14 vm02 bash[23351]: audit 2026-03-09T17:50:13.679630+0000 mon.c (mon.2) 870 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:16 vm00 bash[20770]: cluster 2026-03-09T17:50:14.976360+0000 mgr.y (mgr.14505) 909 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:16 vm00 bash[20770]: cluster 2026-03-09T17:50:14.976360+0000 mgr.y (mgr.14505) 909 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:16 vm00 bash[28333]: cluster 2026-03-09T17:50:14.976360+0000 mgr.y (mgr.14505) 909 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:16 vm00 bash[28333]: cluster 2026-03-09T17:50:14.976360+0000 mgr.y (mgr.14505) 909 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:50:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:50:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:50:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:16 vm02 bash[23351]: cluster 2026-03-09T17:50:14.976360+0000 mgr.y (mgr.14505) 909 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:16 vm02 bash[23351]: cluster 2026-03-09T17:50:14.976360+0000 mgr.y (mgr.14505) 909 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:18 vm02 bash[23351]: cluster 2026-03-09T17:50:16.976687+0000 mgr.y (mgr.14505) 910 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:18 vm02 bash[23351]: cluster 2026-03-09T17:50:16.976687+0000 mgr.y (mgr.14505) 910 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:18 vm00 bash[28333]: cluster 2026-03-09T17:50:16.976687+0000 mgr.y (mgr.14505) 910 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:18 vm00 bash[28333]: cluster 2026-03-09T17:50:16.976687+0000 mgr.y (mgr.14505) 910 : cluster 
[DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:18 vm00 bash[20770]: cluster 2026-03-09T17:50:16.976687+0000 mgr.y (mgr.14505) 910 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:18 vm00 bash[20770]: cluster 2026-03-09T17:50:16.976687+0000 mgr.y (mgr.14505) 910 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:19 vm00 bash[28333]: cluster 2026-03-09T17:50:18.977488+0000 mgr.y (mgr.14505) 911 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:20.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:19 vm00 bash[28333]: cluster 2026-03-09T17:50:18.977488+0000 mgr.y (mgr.14505) 911 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:19 vm00 bash[20770]: cluster 2026-03-09T17:50:18.977488+0000 mgr.y (mgr.14505) 911 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:20.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:19 vm00 bash[20770]: cluster 2026-03-09T17:50:18.977488+0000 mgr.y (mgr.14505) 911 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:20.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:19 vm02 bash[23351]: cluster 2026-03-09T17:50:18.977488+0000 mgr.y (mgr.14505) 911 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:20.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:19 vm02 bash[23351]: cluster 2026-03-09T17:50:18.977488+0000 mgr.y (mgr.14505) 911 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:22 vm02 bash[23351]: cluster 2026-03-09T17:50:20.977793+0000 mgr.y (mgr.14505) 912 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:22.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:22 vm02 bash[23351]: cluster 2026-03-09T17:50:20.977793+0000 mgr.y (mgr.14505) 912 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:22 vm00 bash[28333]: cluster 2026-03-09T17:50:20.977793+0000 mgr.y (mgr.14505) 912 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:22 vm00 bash[28333]: cluster 2026-03-09T17:50:20.977793+0000 mgr.y (mgr.14505) 912 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:22 vm00 bash[20770]: cluster 2026-03-09T17:50:20.977793+0000 mgr.y (mgr.14505) 912 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:22 vm00 bash[20770]: cluster 2026-03-09T17:50:20.977793+0000 mgr.y (mgr.14505) 912 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:22.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:50:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:50:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:24 vm02 bash[23351]: audit 2026-03-09T17:50:22.579772+0000 mgr.y (mgr.14505) 913 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:24 vm02 bash[23351]: audit 2026-03-09T17:50:22.579772+0000 mgr.y (mgr.14505) 913 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:24 vm02 bash[23351]: cluster 2026-03-09T17:50:22.978332+0000 mgr.y (mgr.14505) 914 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:24 vm02 bash[23351]: cluster 2026-03-09T17:50:22.978332+0000 mgr.y (mgr.14505) 914 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:24 vm00 bash[28333]: audit 2026-03-09T17:50:22.579772+0000 mgr.y (mgr.14505) 913 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:24 vm00 bash[28333]: audit 2026-03-09T17:50:22.579772+0000 mgr.y (mgr.14505) 913 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:24 vm00 bash[28333]: cluster 2026-03-09T17:50:22.978332+0000 mgr.y (mgr.14505) 914 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:24 vm00 bash[28333]: cluster 2026-03-09T17:50:22.978332+0000 mgr.y (mgr.14505) 914 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:24 vm00 bash[20770]: audit 2026-03-09T17:50:22.579772+0000 mgr.y (mgr.14505) 913 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:24 vm00 bash[20770]: audit 2026-03-09T17:50:22.579772+0000 mgr.y (mgr.14505) 913 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:24 vm00 bash[20770]: cluster 2026-03-09T17:50:22.978332+0000 mgr.y (mgr.14505) 914 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:24 vm00 bash[20770]: cluster 2026-03-09T17:50:22.978332+0000 mgr.y (mgr.14505) 914 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:26 vm02 bash[23351]: cluster 2026-03-09T17:50:24.978683+0000 mgr.y (mgr.14505) 915 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:26 vm02 bash[23351]: cluster 2026-03-09T17:50:24.978683+0000 mgr.y (mgr.14505) 915 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:26 vm00 bash[28333]: cluster 2026-03-09T17:50:24.978683+0000 mgr.y (mgr.14505) 915 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:26 vm00 bash[28333]: cluster 2026-03-09T17:50:24.978683+0000 mgr.y (mgr.14505) 915 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:26 vm00 bash[20770]: cluster 2026-03-09T17:50:24.978683+0000 mgr.y (mgr.14505) 915 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:26 vm00 bash[20770]: cluster 2026-03-09T17:50:24.978683+0000 mgr.y (mgr.14505) 915 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:50:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:50:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:50:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:28 vm02 bash[23351]: cluster 2026-03-09T17:50:26.978976+0000 mgr.y (mgr.14505) 916 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:28 vm02 bash[23351]: cluster 2026-03-09T17:50:26.978976+0000 mgr.y (mgr.14505) 916 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:28 vm00 bash[28333]: cluster 2026-03-09T17:50:26.978976+0000 mgr.y (mgr.14505) 916 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:28 vm00 
bash[28333]: cluster 2026-03-09T17:50:26.978976+0000 mgr.y (mgr.14505) 916 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:28 vm00 bash[20770]: cluster 2026-03-09T17:50:26.978976+0000 mgr.y (mgr.14505) 916 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:28 vm00 bash[20770]: cluster 2026-03-09T17:50:26.978976+0000 mgr.y (mgr.14505) 916 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:29 vm02 bash[23351]: audit 2026-03-09T17:50:28.686241+0000 mon.c (mon.2) 871 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:29 vm02 bash[23351]: audit 2026-03-09T17:50:28.686241+0000 mon.c (mon.2) 871 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:29 vm00 bash[28333]: audit 2026-03-09T17:50:28.686241+0000 mon.c (mon.2) 871 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:29 vm00 bash[28333]: audit 2026-03-09T17:50:28.686241+0000 mon.c (mon.2) 871 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:29 vm00 bash[20770]: audit 2026-03-09T17:50:28.686241+0000 mon.c (mon.2) 871 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:29 vm00 bash[20770]: audit 2026-03-09T17:50:28.686241+0000 mon.c (mon.2) 871 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:30 vm02 bash[23351]: cluster 2026-03-09T17:50:28.979636+0000 mgr.y (mgr.14505) 917 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:30 vm02 bash[23351]: cluster 2026-03-09T17:50:28.979636+0000 mgr.y (mgr.14505) 917 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:30 vm00 bash[28333]: cluster 2026-03-09T17:50:28.979636+0000 mgr.y (mgr.14505) 917 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:30 vm00 bash[28333]: cluster 2026-03-09T17:50:28.979636+0000 mgr.y 
(mgr.14505) 917 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:30 vm00 bash[20770]: cluster 2026-03-09T17:50:28.979636+0000 mgr.y (mgr.14505) 917 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:30 vm00 bash[20770]: cluster 2026-03-09T17:50:28.979636+0000 mgr.y (mgr.14505) 917 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:32 vm02 bash[23351]: cluster 2026-03-09T17:50:30.979926+0000 mgr.y (mgr.14505) 918 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:32 vm02 bash[23351]: cluster 2026-03-09T17:50:30.979926+0000 mgr.y (mgr.14505) 918 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:32.635 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:50:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:50:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:32 vm00 bash[28333]: cluster 2026-03-09T17:50:30.979926+0000 mgr.y (mgr.14505) 918 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:32 vm00 bash[28333]: cluster 2026-03-09T17:50:30.979926+0000 mgr.y (mgr.14505) 918 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:32 vm00 bash[20770]: cluster 2026-03-09T17:50:30.979926+0000 mgr.y (mgr.14505) 918 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:32 vm00 bash[20770]: cluster 2026-03-09T17:50:30.979926+0000 mgr.y (mgr.14505) 918 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:34 vm02 bash[23351]: audit 2026-03-09T17:50:32.590346+0000 mgr.y (mgr.14505) 919 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:34 vm02 bash[23351]: audit 2026-03-09T17:50:32.590346+0000 mgr.y (mgr.14505) 919 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:34 vm02 bash[23351]: cluster 2026-03-09T17:50:32.980443+0000 mgr.y (mgr.14505) 920 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:34 vm02 
bash[23351]: cluster 2026-03-09T17:50:32.980443+0000 mgr.y (mgr.14505) 920 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:34 vm00 bash[20770]: audit 2026-03-09T17:50:32.590346+0000 mgr.y (mgr.14505) 919 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:34 vm00 bash[20770]: audit 2026-03-09T17:50:32.590346+0000 mgr.y (mgr.14505) 919 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:34 vm00 bash[20770]: cluster 2026-03-09T17:50:32.980443+0000 mgr.y (mgr.14505) 920 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:34 vm00 bash[20770]: cluster 2026-03-09T17:50:32.980443+0000 mgr.y (mgr.14505) 920 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:34 vm00 bash[28333]: audit 2026-03-09T17:50:32.590346+0000 mgr.y (mgr.14505) 919 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:34 vm00 bash[28333]: audit 2026-03-09T17:50:32.590346+0000 mgr.y (mgr.14505) 919 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:34 vm00 bash[28333]: cluster 2026-03-09T17:50:32.980443+0000 mgr.y (mgr.14505) 920 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:34 vm00 bash[28333]: cluster 2026-03-09T17:50:32.980443+0000 mgr.y (mgr.14505) 920 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:36 vm00 bash[20770]: cluster 2026-03-09T17:50:34.980829+0000 mgr.y (mgr.14505) 921 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:36 vm00 bash[20770]: cluster 2026-03-09T17:50:34.980829+0000 mgr.y (mgr.14505) 921 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:50:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:50:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:50:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:36 vm00 bash[28333]: cluster 2026-03-09T17:50:34.980829+0000 mgr.y (mgr.14505) 921 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:36 vm00 bash[28333]: cluster 2026-03-09T17:50:34.980829+0000 mgr.y (mgr.14505) 921 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:36 vm02 bash[23351]: cluster 2026-03-09T17:50:34.980829+0000 mgr.y (mgr.14505) 921 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:36 vm02 bash[23351]: cluster 2026-03-09T17:50:34.980829+0000 mgr.y (mgr.14505) 921 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:38 vm00 bash[28333]: cluster 2026-03-09T17:50:36.981191+0000 mgr.y (mgr.14505) 922 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:38 vm00 bash[28333]: cluster 2026-03-09T17:50:36.981191+0000 mgr.y (mgr.14505) 922 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:38 vm00 bash[20770]: cluster 2026-03-09T17:50:36.981191+0000 mgr.y (mgr.14505) 922 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:38 vm00 bash[20770]: cluster 2026-03-09T17:50:36.981191+0000 mgr.y (mgr.14505) 922 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:38 vm02 bash[23351]: cluster 2026-03-09T17:50:36.981191+0000 mgr.y (mgr.14505) 922 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:38 vm02 bash[23351]: cluster 2026-03-09T17:50:36.981191+0000 mgr.y (mgr.14505) 922 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:40 vm00 bash[20770]: cluster 2026-03-09T17:50:38.981814+0000 mgr.y (mgr.14505) 923 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:40 vm00 bash[20770]: cluster 2026-03-09T17:50:38.981814+0000 mgr.y (mgr.14505) 923 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:40 vm00 bash[28333]: cluster 2026-03-09T17:50:38.981814+0000 mgr.y (mgr.14505) 923 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 17:50:40 vm00 bash[28333]: cluster 2026-03-09T17:50:38.981814+0000 mgr.y (mgr.14505) 923 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:40 vm02 bash[23351]: cluster 2026-03-09T17:50:38.981814+0000 mgr.y (mgr.14505) 923 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:40 vm02 bash[23351]: cluster 2026-03-09T17:50:38.981814+0000 mgr.y (mgr.14505) 923 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:42 vm00 bash[28333]: cluster 2026-03-09T17:50:40.982176+0000 mgr.y (mgr.14505) 924 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:42 vm00 bash[28333]: cluster 2026-03-09T17:50:40.982176+0000 mgr.y (mgr.14505) 924 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:42 vm00 bash[20770]: cluster 2026-03-09T17:50:40.982176+0000 mgr.y (mgr.14505) 924 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:42 vm00 bash[20770]: cluster 2026-03-09T17:50:40.982176+0000 mgr.y (mgr.14505) 924 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:42.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:42 vm02 bash[23351]: cluster 2026-03-09T17:50:40.982176+0000 mgr.y (mgr.14505) 924 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:42.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:42 vm02 bash[23351]: cluster 2026-03-09T17:50:40.982176+0000 mgr.y (mgr.14505) 924 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:42.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:50:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:44 vm00 bash[28333]: audit 2026-03-09T17:50:42.598251+0000 mgr.y (mgr.14505) 925 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:44 vm00 bash[28333]: audit 2026-03-09T17:50:42.598251+0000 mgr.y (mgr.14505) 925 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:44 vm00 bash[28333]: cluster 2026-03-09T17:50:42.982669+0000 mgr.y (mgr.14505) 926 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:44 vm00 bash[28333]: cluster 2026-03-09T17:50:42.982669+0000 mgr.y (mgr.14505) 926 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:44 vm00 bash[28333]: audit 2026-03-09T17:50:43.692807+0000 mon.c (mon.2) 872 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:44 vm00 bash[28333]: audit 2026-03-09T17:50:43.692807+0000 mon.c (mon.2) 872 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:44 vm00 bash[20770]: audit 2026-03-09T17:50:42.598251+0000 mgr.y (mgr.14505) 925 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:44 vm00 bash[20770]: audit 2026-03-09T17:50:42.598251+0000 mgr.y (mgr.14505) 925 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:44 vm00 bash[20770]: cluster 2026-03-09T17:50:42.982669+0000 mgr.y (mgr.14505) 926 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:44 vm00 bash[20770]: cluster 2026-03-09T17:50:42.982669+0000 mgr.y (mgr.14505) 926 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:44 vm00 bash[20770]: audit 2026-03-09T17:50:43.692807+0000 mon.c (mon.2) 872 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:44 vm00 bash[20770]: audit 2026-03-09T17:50:43.692807+0000 mon.c (mon.2) 872 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:44 vm02 bash[23351]: audit 2026-03-09T17:50:42.598251+0000 mgr.y (mgr.14505) 925 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:44 vm02 bash[23351]: audit 2026-03-09T17:50:42.598251+0000 mgr.y (mgr.14505) 925 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:44 vm02 bash[23351]: cluster 2026-03-09T17:50:42.982669+0000 mgr.y (mgr.14505) 926 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:44 
vm02 bash[23351]: cluster 2026-03-09T17:50:42.982669+0000 mgr.y (mgr.14505) 926 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:44 vm02 bash[23351]: audit 2026-03-09T17:50:43.692807+0000 mon.c (mon.2) 872 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:44 vm02 bash[23351]: audit 2026-03-09T17:50:43.692807+0000 mon.c (mon.2) 872 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:46 vm00 bash[28333]: cluster 2026-03-09T17:50:44.983044+0000 mgr.y (mgr.14505) 927 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:46 vm00 bash[28333]: cluster 2026-03-09T17:50:44.983044+0000 mgr.y (mgr.14505) 927 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:46 vm00 bash[20770]: cluster 2026-03-09T17:50:44.983044+0000 mgr.y (mgr.14505) 927 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:46 vm00 bash[20770]: cluster 2026-03-09T17:50:44.983044+0000 mgr.y (mgr.14505) 927 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:50:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:50:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:50:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:46 vm02 bash[23351]: cluster 2026-03-09T17:50:44.983044+0000 mgr.y (mgr.14505) 927 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:46 vm02 bash[23351]: cluster 2026-03-09T17:50:44.983044+0000 mgr.y (mgr.14505) 927 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:48 vm00 bash[28333]: cluster 2026-03-09T17:50:46.983312+0000 mgr.y (mgr.14505) 928 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:48 vm00 bash[28333]: cluster 2026-03-09T17:50:46.983312+0000 mgr.y (mgr.14505) 928 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:48 vm00 bash[20770]: cluster 2026-03-09T17:50:46.983312+0000 mgr.y (mgr.14505) 928 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:48 vm00 bash[20770]: cluster 2026-03-09T17:50:46.983312+0000 mgr.y (mgr.14505) 928 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:48 vm02 bash[23351]: cluster 2026-03-09T17:50:46.983312+0000 mgr.y (mgr.14505) 928 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:48 vm02 bash[23351]: cluster 2026-03-09T17:50:46.983312+0000 mgr.y (mgr.14505) 928 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:50 vm00 bash[20770]: cluster 2026-03-09T17:50:48.983998+0000 mgr.y (mgr.14505) 929 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:50 vm00 bash[20770]: cluster 2026-03-09T17:50:48.983998+0000 mgr.y (mgr.14505) 929 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:50 vm00 bash[28333]: cluster 2026-03-09T17:50:48.983998+0000 mgr.y (mgr.14505) 929 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:50 vm00 bash[28333]: cluster 2026-03-09T17:50:48.983998+0000 mgr.y (mgr.14505) 929 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:50 vm02 bash[23351]: cluster 2026-03-09T17:50:48.983998+0000 mgr.y (mgr.14505) 929 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:50 vm02 bash[23351]: cluster 2026-03-09T17:50:48.983998+0000 mgr.y (mgr.14505) 929 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:52 vm00 bash[20770]: cluster 2026-03-09T17:50:50.984295+0000 mgr.y (mgr.14505) 930 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:52 vm00 bash[20770]: cluster 2026-03-09T17:50:50.984295+0000 mgr.y (mgr.14505) 930 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:52 vm00 bash[28333]: cluster 2026-03-09T17:50:50.984295+0000 mgr.y (mgr.14505) 930 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:52.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:52 vm00 bash[28333]: cluster 2026-03-09T17:50:50.984295+0000 mgr.y (mgr.14505) 930 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:52 vm02 bash[23351]: cluster 2026-03-09T17:50:50.984295+0000 mgr.y (mgr.14505) 930 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:52 vm02 bash[23351]: cluster 2026-03-09T17:50:50.984295+0000 mgr.y (mgr.14505) 930 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:52.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:50:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:50:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:54 vm02 bash[23351]: audit 2026-03-09T17:50:52.602904+0000 mgr.y (mgr.14505) 931 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:54 vm02 bash[23351]: audit 2026-03-09T17:50:52.602904+0000 mgr.y (mgr.14505) 931 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:54 vm02 bash[23351]: cluster 2026-03-09T17:50:52.984913+0000 mgr.y (mgr.14505) 932 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:54 vm02 bash[23351]: cluster 2026-03-09T17:50:52.984913+0000 mgr.y (mgr.14505) 932 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:54 vm00 bash[20770]: audit 2026-03-09T17:50:52.602904+0000 mgr.y (mgr.14505) 931 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:54 vm00 bash[20770]: audit 2026-03-09T17:50:52.602904+0000 mgr.y (mgr.14505) 931 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:54 vm00 bash[20770]: cluster 2026-03-09T17:50:52.984913+0000 mgr.y (mgr.14505) 932 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:54 vm00 bash[20770]: cluster 2026-03-09T17:50:52.984913+0000 mgr.y (mgr.14505) 932 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:54 vm00 bash[28333]: audit 2026-03-09T17:50:52.602904+0000 mgr.y (mgr.14505) 931 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:54 vm00 bash[28333]: audit 2026-03-09T17:50:52.602904+0000 mgr.y (mgr.14505) 931 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:54 vm00 bash[28333]: cluster 2026-03-09T17:50:52.984913+0000 mgr.y (mgr.14505) 932 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:54 vm00 bash[28333]: cluster 2026-03-09T17:50:52.984913+0000 mgr.y (mgr.14505) 932 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:50:55.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:55 vm02 bash[23351]: cluster 2026-03-09T17:50:54.985299+0000 mgr.y (mgr.14505) 933 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:55.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:55 vm02 bash[23351]: cluster 2026-03-09T17:50:54.985299+0000 mgr.y (mgr.14505) 933 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:55 vm00 bash[28333]: cluster 2026-03-09T17:50:54.985299+0000 mgr.y (mgr.14505) 933 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:55 vm00 bash[28333]: cluster 2026-03-09T17:50:54.985299+0000 mgr.y (mgr.14505) 933 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:55 vm00 bash[20770]: cluster 2026-03-09T17:50:54.985299+0000 mgr.y (mgr.14505) 933 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:55 vm00 bash[20770]: cluster 2026-03-09T17:50:54.985299+0000 mgr.y (mgr.14505) 933 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:50:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:50:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:50:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:58 vm02 bash[23351]: cluster 2026-03-09T17:50:56.985681+0000 mgr.y (mgr.14505) 934 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:58 vm02 bash[23351]: cluster 2026-03-09T17:50:56.985681+0000 mgr.y (mgr.14505) 934 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:58 vm00 bash[28333]: cluster 2026-03-09T17:50:56.985681+0000 mgr.y 
(mgr.14505) 934 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:58 vm00 bash[28333]: cluster 2026-03-09T17:50:56.985681+0000 mgr.y (mgr.14505) 934 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:58 vm00 bash[20770]: cluster 2026-03-09T17:50:56.985681+0000 mgr.y (mgr.14505) 934 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:58 vm00 bash[20770]: cluster 2026-03-09T17:50:56.985681+0000 mgr.y (mgr.14505) 934 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:50:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:59 vm02 bash[23351]: audit 2026-03-09T17:50:58.698762+0000 mon.c (mon.2) 873 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:50:59 vm02 bash[23351]: audit 2026-03-09T17:50:58.698762+0000 mon.c (mon.2) 873 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:59 vm00 bash[28333]: audit 2026-03-09T17:50:58.698762+0000 mon.c (mon.2) 873 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:50:59 vm00 bash[28333]: audit 2026-03-09T17:50:58.698762+0000 mon.c (mon.2) 873 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:59 vm00 bash[20770]: audit 2026-03-09T17:50:58.698762+0000 mon.c (mon.2) 873 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:50:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:50:59 vm00 bash[20770]: audit 2026-03-09T17:50:58.698762+0000 mon.c (mon.2) 873 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:00 vm02 bash[23351]: cluster 2026-03-09T17:50:58.986353+0000 mgr.y (mgr.14505) 935 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:00 vm02 bash[23351]: cluster 2026-03-09T17:50:58.986353+0000 mgr.y (mgr.14505) 935 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:00 vm00 bash[28333]: cluster 2026-03-09T17:50:58.986353+0000 mgr.y (mgr.14505) 935 : cluster [DBG] pgmap v1397: 228 pgs: 228 
active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:00 vm00 bash[28333]: cluster 2026-03-09T17:50:58.986353+0000 mgr.y (mgr.14505) 935 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:00 vm00 bash[20770]: cluster 2026-03-09T17:50:58.986353+0000 mgr.y (mgr.14505) 935 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:00 vm00 bash[20770]: cluster 2026-03-09T17:50:58.986353+0000 mgr.y (mgr.14505) 935 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:02 vm02 bash[23351]: cluster 2026-03-09T17:51:00.986779+0000 mgr.y (mgr.14505) 936 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:02 vm02 bash[23351]: cluster 2026-03-09T17:51:00.986779+0000 mgr.y (mgr.14505) 936 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:02 vm00 bash[28333]: cluster 2026-03-09T17:51:00.986779+0000 mgr.y (mgr.14505) 936 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:02 vm00 bash[28333]: cluster 2026-03-09T17:51:00.986779+0000 mgr.y (mgr.14505) 936 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:02 vm00 bash[20770]: cluster 2026-03-09T17:51:00.986779+0000 mgr.y (mgr.14505) 936 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:02 vm00 bash[20770]: cluster 2026-03-09T17:51:00.986779+0000 mgr.y (mgr.14505) 936 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:02.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:51:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:51:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:04 vm02 bash[23351]: audit 2026-03-09T17:51:02.613174+0000 mgr.y (mgr.14505) 937 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:04 vm02 bash[23351]: audit 2026-03-09T17:51:02.613174+0000 mgr.y (mgr.14505) 937 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:04 vm02 bash[23351]: cluster 2026-03-09T17:51:02.987470+0000 mgr.y 
(mgr.14505) 938 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:04 vm02 bash[23351]: cluster 2026-03-09T17:51:02.987470+0000 mgr.y (mgr.14505) 938 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:04 vm00 bash[28333]: audit 2026-03-09T17:51:02.613174+0000 mgr.y (mgr.14505) 937 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:04 vm00 bash[28333]: audit 2026-03-09T17:51:02.613174+0000 mgr.y (mgr.14505) 937 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:04 vm00 bash[28333]: cluster 2026-03-09T17:51:02.987470+0000 mgr.y (mgr.14505) 938 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:04 vm00 bash[28333]: cluster 2026-03-09T17:51:02.987470+0000 mgr.y (mgr.14505) 938 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:04 vm00 bash[20770]: audit 2026-03-09T17:51:02.613174+0000 mgr.y (mgr.14505) 937 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:04 vm00 bash[20770]: audit 2026-03-09T17:51:02.613174+0000 mgr.y (mgr.14505) 937 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:04 vm00 bash[20770]: cluster 2026-03-09T17:51:02.987470+0000 mgr.y (mgr.14505) 938 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:04 vm00 bash[20770]: cluster 2026-03-09T17:51:02.987470+0000 mgr.y (mgr.14505) 938 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:06.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:06 vm02 bash[23351]: cluster 2026-03-09T17:51:04.987795+0000 mgr.y (mgr.14505) 939 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:06.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:06 vm02 bash[23351]: cluster 2026-03-09T17:51:04.987795+0000 mgr.y (mgr.14505) 939 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:06 vm00 bash[28333]: cluster 2026-03-09T17:51:04.987795+0000 mgr.y (mgr.14505) 939 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:06 vm00 bash[28333]: cluster 2026-03-09T17:51:04.987795+0000 mgr.y (mgr.14505) 939 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:06 vm00 bash[20770]: cluster 2026-03-09T17:51:04.987795+0000 mgr.y (mgr.14505) 939 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:06 vm00 bash[20770]: cluster 2026-03-09T17:51:04.987795+0000 mgr.y (mgr.14505) 939 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:51:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:51:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:51:07.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.291260+0000 mon.c (mon.2) 874 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.291260+0000 mon.c (mon.2) 874 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.606615+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.606615+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.616675+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.616675+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.920040+0000 mon.c (mon.2) 875 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.920040+0000 mon.c (mon.2) 875 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.921051+0000 mon.c (mon.2) 876 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.921051+0000 mon.c (mon.2) 876 : audit [INF] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.936606+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:07 vm02 bash[23351]: audit 2026-03-09T17:51:06.936606+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.291260+0000 mon.c (mon.2) 874 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.291260+0000 mon.c (mon.2) 874 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.606615+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.606615+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.616675+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.616675+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.920040+0000 mon.c (mon.2) 875 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.920040+0000 mon.c (mon.2) 875 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.921051+0000 mon.c (mon.2) 876 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.921051+0000 mon.c (mon.2) 876 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.936606+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:07 vm00 bash[28333]: audit 2026-03-09T17:51:06.936606+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: 
audit 2026-03-09T17:51:06.291260+0000 mon.c (mon.2) 874 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.291260+0000 mon.c (mon.2) 874 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.606615+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.606615+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.616675+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.616675+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.920040+0000 mon.c (mon.2) 875 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.920040+0000 mon.c (mon.2) 875 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.921051+0000 mon.c (mon.2) 876 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.921051+0000 mon.c (mon.2) 876 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.936606+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:07.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:07 vm00 bash[20770]: audit 2026-03-09T17:51:06.936606+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:51:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:08 vm02 bash[23351]: cluster 2026-03-09T17:51:06.988165+0000 mgr.y (mgr.14505) 940 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:08 vm02 bash[23351]: cluster 2026-03-09T17:51:06.988165+0000 mgr.y (mgr.14505) 940 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:08 vm00 bash[28333]: cluster 
2026-03-09T17:51:06.988165+0000 mgr.y (mgr.14505) 940 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:08 vm00 bash[28333]: cluster 2026-03-09T17:51:06.988165+0000 mgr.y (mgr.14505) 940 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:08 vm00 bash[20770]: cluster 2026-03-09T17:51:06.988165+0000 mgr.y (mgr.14505) 940 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:08 vm00 bash[20770]: cluster 2026-03-09T17:51:06.988165+0000 mgr.y (mgr.14505) 940 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:10.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:10 vm02 bash[23351]: cluster 2026-03-09T17:51:08.988974+0000 mgr.y (mgr.14505) 941 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:10.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:10 vm02 bash[23351]: cluster 2026-03-09T17:51:08.988974+0000 mgr.y (mgr.14505) 941 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:10 vm00 bash[28333]: cluster 2026-03-09T17:51:08.988974+0000 mgr.y (mgr.14505) 941 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:10 vm00 bash[28333]: cluster 2026-03-09T17:51:08.988974+0000 mgr.y (mgr.14505) 941 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:10 vm00 bash[20770]: cluster 2026-03-09T17:51:08.988974+0000 mgr.y (mgr.14505) 941 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:10 vm00 bash[20770]: cluster 2026-03-09T17:51:08.988974+0000 mgr.y (mgr.14505) 941 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:12 vm02 bash[23351]: cluster 2026-03-09T17:51:10.989271+0000 mgr.y (mgr.14505) 942 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:12 vm02 bash[23351]: cluster 2026-03-09T17:51:10.989271+0000 mgr.y (mgr.14505) 942 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:12 vm00 bash[28333]: cluster 2026-03-09T17:51:10.989271+0000 mgr.y (mgr.14505) 942 : cluster [DBG] pgmap v1403: 228 
pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:12 vm00 bash[28333]: cluster 2026-03-09T17:51:10.989271+0000 mgr.y (mgr.14505) 942 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:12 vm00 bash[20770]: cluster 2026-03-09T17:51:10.989271+0000 mgr.y (mgr.14505) 942 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:12 vm00 bash[20770]: cluster 2026-03-09T17:51:10.989271+0000 mgr.y (mgr.14505) 942 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:12.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:51:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:51:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:14 vm02 bash[23351]: audit 2026-03-09T17:51:12.615492+0000 mgr.y (mgr.14505) 943 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:14 vm02 bash[23351]: audit 2026-03-09T17:51:12.615492+0000 mgr.y (mgr.14505) 943 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:14 vm02 bash[23351]: cluster 2026-03-09T17:51:12.989723+0000 mgr.y (mgr.14505) 944 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:14 vm02 bash[23351]: cluster 2026-03-09T17:51:12.989723+0000 mgr.y (mgr.14505) 944 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:14 vm02 bash[23351]: audit 2026-03-09T17:51:13.706248+0000 mon.c (mon.2) 877 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:14 vm02 bash[23351]: audit 2026-03-09T17:51:13.706248+0000 mon.c (mon.2) 877 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:14 vm00 bash[28333]: audit 2026-03-09T17:51:12.615492+0000 mgr.y (mgr.14505) 943 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:14 vm00 bash[28333]: audit 2026-03-09T17:51:12.615492+0000 mgr.y (mgr.14505) 943 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:14 vm00 bash[28333]: cluster 
2026-03-09T17:51:12.989723+0000 mgr.y (mgr.14505) 944 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:14 vm00 bash[28333]: cluster 2026-03-09T17:51:12.989723+0000 mgr.y (mgr.14505) 944 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:14 vm00 bash[28333]: audit 2026-03-09T17:51:13.706248+0000 mon.c (mon.2) 877 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:14 vm00 bash[28333]: audit 2026-03-09T17:51:13.706248+0000 mon.c (mon.2) 877 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:14 vm00 bash[20770]: audit 2026-03-09T17:51:12.615492+0000 mgr.y (mgr.14505) 943 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:14 vm00 bash[20770]: audit 2026-03-09T17:51:12.615492+0000 mgr.y (mgr.14505) 943 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:14 vm00 bash[20770]: cluster 2026-03-09T17:51:12.989723+0000 mgr.y (mgr.14505) 944 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:14 vm00 bash[20770]: cluster 2026-03-09T17:51:12.989723+0000 mgr.y (mgr.14505) 944 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:14 vm00 bash[20770]: audit 2026-03-09T17:51:13.706248+0000 mon.c (mon.2) 877 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:14 vm00 bash[20770]: audit 2026-03-09T17:51:13.706248+0000 mon.c (mon.2) 877 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:16 vm00 bash[28333]: cluster 2026-03-09T17:51:14.990153+0000 mgr.y (mgr.14505) 945 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:16 vm00 bash[28333]: cluster 2026-03-09T17:51:14.990153+0000 mgr.y (mgr.14505) 945 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:16 vm00 bash[20770]: cluster 2026-03-09T17:51:14.990153+0000 mgr.y (mgr.14505) 945 : cluster [DBG] 
pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:16 vm00 bash[20770]: cluster 2026-03-09T17:51:14.990153+0000 mgr.y (mgr.14505) 945 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:51:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:51:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:51:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:16 vm02 bash[23351]: cluster 2026-03-09T17:51:14.990153+0000 mgr.y (mgr.14505) 945 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:16 vm02 bash[23351]: cluster 2026-03-09T17:51:14.990153+0000 mgr.y (mgr.14505) 945 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:18 vm00 bash[28333]: cluster 2026-03-09T17:51:16.990403+0000 mgr.y (mgr.14505) 946 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:18 vm00 bash[28333]: cluster 2026-03-09T17:51:16.990403+0000 mgr.y (mgr.14505) 946 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:18 vm00 bash[20770]: cluster 2026-03-09T17:51:16.990403+0000 mgr.y (mgr.14505) 946 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:18 vm00 bash[20770]: cluster 2026-03-09T17:51:16.990403+0000 mgr.y (mgr.14505) 946 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:18 vm02 bash[23351]: cluster 2026-03-09T17:51:16.990403+0000 mgr.y (mgr.14505) 946 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:18 vm02 bash[23351]: cluster 2026-03-09T17:51:16.990403+0000 mgr.y (mgr.14505) 946 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:20 vm02 bash[23351]: cluster 2026-03-09T17:51:18.991204+0000 mgr.y (mgr.14505) 947 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:20 vm02 bash[23351]: cluster 2026-03-09T17:51:18.991204+0000 mgr.y (mgr.14505) 947 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 17:51:20 vm00 bash[20770]: cluster 2026-03-09T17:51:18.991204+0000 mgr.y (mgr.14505) 947 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:20 vm00 bash[20770]: cluster 2026-03-09T17:51:18.991204+0000 mgr.y (mgr.14505) 947 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:20 vm00 bash[28333]: cluster 2026-03-09T17:51:18.991204+0000 mgr.y (mgr.14505) 947 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:20 vm00 bash[28333]: cluster 2026-03-09T17:51:18.991204+0000 mgr.y (mgr.14505) 947 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:22.623 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:22 vm02 bash[23351]: cluster 2026-03-09T17:51:20.991616+0000 mgr.y (mgr.14505) 948 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:22.623 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:22 vm02 bash[23351]: cluster 2026-03-09T17:51:20.991616+0000 mgr.y (mgr.14505) 948 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:22 vm00 bash[28333]: cluster 2026-03-09T17:51:20.991616+0000 mgr.y (mgr.14505) 948 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:22 vm00 bash[28333]: cluster 2026-03-09T17:51:20.991616+0000 mgr.y (mgr.14505) 948 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:22 vm00 bash[20770]: cluster 2026-03-09T17:51:20.991616+0000 mgr.y (mgr.14505) 948 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:22 vm00 bash[20770]: cluster 2026-03-09T17:51:20.991616+0000 mgr.y (mgr.14505) 948 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:22.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:51:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:51:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:24 vm02 bash[23351]: audit 2026-03-09T17:51:22.622358+0000 mgr.y (mgr.14505) 949 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:24 vm02 bash[23351]: audit 2026-03-09T17:51:22.622358+0000 mgr.y (mgr.14505) 949 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T17:51:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:24 vm02 bash[23351]: cluster 2026-03-09T17:51:22.992181+0000 mgr.y (mgr.14505) 950 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:24 vm02 bash[23351]: cluster 2026-03-09T17:51:22.992181+0000 mgr.y (mgr.14505) 950 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:24 vm00 bash[28333]: audit 2026-03-09T17:51:22.622358+0000 mgr.y (mgr.14505) 949 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:24 vm00 bash[28333]: audit 2026-03-09T17:51:22.622358+0000 mgr.y (mgr.14505) 949 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:24 vm00 bash[28333]: cluster 2026-03-09T17:51:22.992181+0000 mgr.y (mgr.14505) 950 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:24 vm00 bash[28333]: cluster 2026-03-09T17:51:22.992181+0000 mgr.y (mgr.14505) 950 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:24 vm00 bash[20770]: audit 2026-03-09T17:51:22.622358+0000 mgr.y (mgr.14505) 949 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:24 vm00 bash[20770]: audit 2026-03-09T17:51:22.622358+0000 mgr.y (mgr.14505) 949 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:24 vm00 bash[20770]: cluster 2026-03-09T17:51:22.992181+0000 mgr.y (mgr.14505) 950 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:24 vm00 bash[20770]: cluster 2026-03-09T17:51:22.992181+0000 mgr.y (mgr.14505) 950 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:26 vm02 bash[23351]: cluster 2026-03-09T17:51:24.992543+0000 mgr.y (mgr.14505) 951 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:26 vm02 bash[23351]: cluster 2026-03-09T17:51:24.992543+0000 mgr.y (mgr.14505) 951 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:26 vm00 
bash[28333]: cluster 2026-03-09T17:51:24.992543+0000 mgr.y (mgr.14505) 951 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:26 vm00 bash[28333]: cluster 2026-03-09T17:51:24.992543+0000 mgr.y (mgr.14505) 951 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:26 vm00 bash[20770]: cluster 2026-03-09T17:51:24.992543+0000 mgr.y (mgr.14505) 951 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:26 vm00 bash[20770]: cluster 2026-03-09T17:51:24.992543+0000 mgr.y (mgr.14505) 951 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:51:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:51:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:51:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:28 vm02 bash[23351]: cluster 2026-03-09T17:51:26.992812+0000 mgr.y (mgr.14505) 952 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:28 vm02 bash[23351]: cluster 2026-03-09T17:51:26.992812+0000 mgr.y (mgr.14505) 952 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:28 vm00 bash[28333]: cluster 2026-03-09T17:51:26.992812+0000 mgr.y (mgr.14505) 952 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:28 vm00 bash[28333]: cluster 2026-03-09T17:51:26.992812+0000 mgr.y (mgr.14505) 952 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:28 vm00 bash[20770]: cluster 2026-03-09T17:51:26.992812+0000 mgr.y (mgr.14505) 952 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:28 vm00 bash[20770]: cluster 2026-03-09T17:51:26.992812+0000 mgr.y (mgr.14505) 952 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:29 vm02 bash[23351]: audit 2026-03-09T17:51:28.713255+0000 mon.c (mon.2) 878 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:29 vm02 bash[23351]: audit 2026-03-09T17:51:28.713255+0000 mon.c (mon.2) 878 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-09T17:51:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:29 vm00 bash[28333]: audit 2026-03-09T17:51:28.713255+0000 mon.c (mon.2) 878 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:29 vm00 bash[28333]: audit 2026-03-09T17:51:28.713255+0000 mon.c (mon.2) 878 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:29 vm00 bash[20770]: audit 2026-03-09T17:51:28.713255+0000 mon.c (mon.2) 878 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:29 vm00 bash[20770]: audit 2026-03-09T17:51:28.713255+0000 mon.c (mon.2) 878 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:30 vm02 bash[23351]: cluster 2026-03-09T17:51:28.993599+0000 mgr.y (mgr.14505) 953 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:30 vm02 bash[23351]: cluster 2026-03-09T17:51:28.993599+0000 mgr.y (mgr.14505) 953 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:30 vm00 bash[28333]: cluster 2026-03-09T17:51:28.993599+0000 mgr.y (mgr.14505) 953 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:30 vm00 bash[28333]: cluster 2026-03-09T17:51:28.993599+0000 mgr.y (mgr.14505) 953 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:30 vm00 bash[20770]: cluster 2026-03-09T17:51:28.993599+0000 mgr.y (mgr.14505) 953 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:30 vm00 bash[20770]: cluster 2026-03-09T17:51:28.993599+0000 mgr.y (mgr.14505) 953 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:32.628 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:32 vm02 bash[23351]: cluster 2026-03-09T17:51:30.994004+0000 mgr.y (mgr.14505) 954 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:32.628 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:32 vm02 bash[23351]: cluster 2026-03-09T17:51:30.994004+0000 mgr.y (mgr.14505) 954 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:32.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:32 vm00 bash[28333]: cluster 2026-03-09T17:51:30.994004+0000 mgr.y (mgr.14505) 954 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:32 vm00 bash[28333]: cluster 2026-03-09T17:51:30.994004+0000 mgr.y (mgr.14505) 954 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:32 vm00 bash[20770]: cluster 2026-03-09T17:51:30.994004+0000 mgr.y (mgr.14505) 954 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:32 vm00 bash[20770]: cluster 2026-03-09T17:51:30.994004+0000 mgr.y (mgr.14505) 954 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:32.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:51:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:51:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:34 vm02 bash[23351]: audit 2026-03-09T17:51:32.627554+0000 mgr.y (mgr.14505) 955 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:34 vm02 bash[23351]: audit 2026-03-09T17:51:32.627554+0000 mgr.y (mgr.14505) 955 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:34 vm02 bash[23351]: cluster 2026-03-09T17:51:32.994492+0000 mgr.y (mgr.14505) 956 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:34 vm02 bash[23351]: cluster 2026-03-09T17:51:32.994492+0000 mgr.y (mgr.14505) 956 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:34 vm00 bash[28333]: audit 2026-03-09T17:51:32.627554+0000 mgr.y (mgr.14505) 955 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:34 vm00 bash[28333]: audit 2026-03-09T17:51:32.627554+0000 mgr.y (mgr.14505) 955 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:34 vm00 bash[28333]: cluster 2026-03-09T17:51:32.994492+0000 mgr.y (mgr.14505) 956 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:34 vm00 bash[28333]: cluster 2026-03-09T17:51:32.994492+0000 mgr.y (mgr.14505) 956 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:34 vm00 bash[20770]: audit 2026-03-09T17:51:32.627554+0000 mgr.y (mgr.14505) 955 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:34 vm00 bash[20770]: audit 2026-03-09T17:51:32.627554+0000 mgr.y (mgr.14505) 955 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:34 vm00 bash[20770]: cluster 2026-03-09T17:51:32.994492+0000 mgr.y (mgr.14505) 956 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:34 vm00 bash[20770]: cluster 2026-03-09T17:51:32.994492+0000 mgr.y (mgr.14505) 956 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:36 vm00 bash[28333]: cluster 2026-03-09T17:51:34.994933+0000 mgr.y (mgr.14505) 957 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:36 vm00 bash[28333]: cluster 2026-03-09T17:51:34.994933+0000 mgr.y (mgr.14505) 957 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:36 vm00 bash[20770]: cluster 2026-03-09T17:51:34.994933+0000 mgr.y (mgr.14505) 957 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:36 vm00 bash[20770]: cluster 2026-03-09T17:51:34.994933+0000 mgr.y (mgr.14505) 957 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:51:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:51:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:51:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:36 vm02 bash[23351]: cluster 2026-03-09T17:51:34.994933+0000 mgr.y (mgr.14505) 957 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:36 vm02 bash[23351]: cluster 2026-03-09T17:51:34.994933+0000 mgr.y (mgr.14505) 957 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:38 vm00 bash[28333]: cluster 2026-03-09T17:51:36.995193+0000 mgr.y (mgr.14505) 958 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:38 vm00 bash[28333]: cluster 2026-03-09T17:51:36.995193+0000 mgr.y (mgr.14505) 
958 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:38 vm00 bash[20770]: cluster 2026-03-09T17:51:36.995193+0000 mgr.y (mgr.14505) 958 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:38 vm00 bash[20770]: cluster 2026-03-09T17:51:36.995193+0000 mgr.y (mgr.14505) 958 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:38 vm02 bash[23351]: cluster 2026-03-09T17:51:36.995193+0000 mgr.y (mgr.14505) 958 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:38 vm02 bash[23351]: cluster 2026-03-09T17:51:36.995193+0000 mgr.y (mgr.14505) 958 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:40 vm00 bash[28333]: cluster 2026-03-09T17:51:38.995826+0000 mgr.y (mgr.14505) 959 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:40 vm00 bash[28333]: cluster 2026-03-09T17:51:38.995826+0000 mgr.y (mgr.14505) 959 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:40 vm00 bash[20770]: cluster 2026-03-09T17:51:38.995826+0000 mgr.y (mgr.14505) 959 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:40 vm00 bash[20770]: cluster 2026-03-09T17:51:38.995826+0000 mgr.y (mgr.14505) 959 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:40 vm02 bash[23351]: cluster 2026-03-09T17:51:38.995826+0000 mgr.y (mgr.14505) 959 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:40 vm02 bash[23351]: cluster 2026-03-09T17:51:38.995826+0000 mgr.y (mgr.14505) 959 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:42 vm00 bash[28333]: cluster 2026-03-09T17:51:40.996150+0000 mgr.y (mgr.14505) 960 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:42 vm00 bash[28333]: cluster 2026-03-09T17:51:40.996150+0000 mgr.y (mgr.14505) 960 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:42 vm00 bash[20770]: cluster 2026-03-09T17:51:40.996150+0000 mgr.y (mgr.14505) 960 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:42 vm00 bash[20770]: cluster 2026-03-09T17:51:40.996150+0000 mgr.y (mgr.14505) 960 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:42.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:42 vm02 bash[23351]: cluster 2026-03-09T17:51:40.996150+0000 mgr.y (mgr.14505) 960 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:42.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:42 vm02 bash[23351]: cluster 2026-03-09T17:51:40.996150+0000 mgr.y (mgr.14505) 960 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:42.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:51:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:44 vm00 bash[28333]: audit 2026-03-09T17:51:42.633521+0000 mgr.y (mgr.14505) 961 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:44 vm00 bash[28333]: audit 2026-03-09T17:51:42.633521+0000 mgr.y (mgr.14505) 961 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:44 vm00 bash[28333]: cluster 2026-03-09T17:51:42.996806+0000 mgr.y (mgr.14505) 962 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:44 vm00 bash[28333]: cluster 2026-03-09T17:51:42.996806+0000 mgr.y (mgr.14505) 962 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:44 vm00 bash[28333]: audit 2026-03-09T17:51:43.719409+0000 mon.c (mon.2) 879 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:44 vm00 bash[28333]: audit 2026-03-09T17:51:43.719409+0000 mon.c (mon.2) 879 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:44 vm00 bash[20770]: audit 2026-03-09T17:51:42.633521+0000 mgr.y (mgr.14505) 961 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:44 vm00 bash[20770]: audit 2026-03-09T17:51:42.633521+0000 mgr.y (mgr.14505) 961 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:44 vm00 bash[20770]: cluster 2026-03-09T17:51:42.996806+0000 mgr.y (mgr.14505) 962 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:44 vm00 bash[20770]: cluster 2026-03-09T17:51:42.996806+0000 mgr.y (mgr.14505) 962 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:44 vm00 bash[20770]: audit 2026-03-09T17:51:43.719409+0000 mon.c (mon.2) 879 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:44 vm00 bash[20770]: audit 2026-03-09T17:51:43.719409+0000 mon.c (mon.2) 879 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:44 vm02 bash[23351]: audit 2026-03-09T17:51:42.633521+0000 mgr.y (mgr.14505) 961 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:44 vm02 bash[23351]: audit 2026-03-09T17:51:42.633521+0000 mgr.y (mgr.14505) 961 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:44 vm02 bash[23351]: cluster 2026-03-09T17:51:42.996806+0000 mgr.y (mgr.14505) 962 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:44 vm02 bash[23351]: cluster 2026-03-09T17:51:42.996806+0000 mgr.y (mgr.14505) 962 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:44 vm02 bash[23351]: audit 2026-03-09T17:51:43.719409+0000 mon.c (mon.2) 879 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:44 vm02 bash[23351]: audit 2026-03-09T17:51:43.719409+0000 mon.c (mon.2) 879 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:46 vm00 bash[28333]: cluster 2026-03-09T17:51:44.997285+0000 mgr.y (mgr.14505) 963 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:46 vm00 bash[28333]: cluster 2026-03-09T17:51:44.997285+0000 mgr.y (mgr.14505) 963 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:46 vm00 bash[20770]: cluster 2026-03-09T17:51:44.997285+0000 mgr.y (mgr.14505) 963 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:46 vm00 bash[20770]: cluster 2026-03-09T17:51:44.997285+0000 mgr.y (mgr.14505) 963 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:51:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:51:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:51:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:46 vm02 bash[23351]: cluster 2026-03-09T17:51:44.997285+0000 mgr.y (mgr.14505) 963 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:46 vm02 bash[23351]: cluster 2026-03-09T17:51:44.997285+0000 mgr.y (mgr.14505) 963 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:48 vm00 bash[28333]: cluster 2026-03-09T17:51:46.997605+0000 mgr.y (mgr.14505) 964 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:48 vm00 bash[28333]: cluster 2026-03-09T17:51:46.997605+0000 mgr.y (mgr.14505) 964 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:48 vm00 bash[20770]: cluster 2026-03-09T17:51:46.997605+0000 mgr.y (mgr.14505) 964 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:48 vm00 bash[20770]: cluster 2026-03-09T17:51:46.997605+0000 mgr.y (mgr.14505) 964 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:48 vm02 bash[23351]: cluster 2026-03-09T17:51:46.997605+0000 mgr.y (mgr.14505) 964 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:48 vm02 bash[23351]: cluster 2026-03-09T17:51:46.997605+0000 mgr.y (mgr.14505) 964 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:50 vm00 bash[28333]: cluster 2026-03-09T17:51:48.998320+0000 mgr.y (mgr.14505) 965 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:50 vm00 bash[28333]: cluster 2026-03-09T17:51:48.998320+0000 
mgr.y (mgr.14505) 965 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:50 vm00 bash[20770]: cluster 2026-03-09T17:51:48.998320+0000 mgr.y (mgr.14505) 965 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:50 vm00 bash[20770]: cluster 2026-03-09T17:51:48.998320+0000 mgr.y (mgr.14505) 965 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:50 vm02 bash[23351]: cluster 2026-03-09T17:51:48.998320+0000 mgr.y (mgr.14505) 965 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:50 vm02 bash[23351]: cluster 2026-03-09T17:51:48.998320+0000 mgr.y (mgr.14505) 965 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:52 vm00 bash[28333]: cluster 2026-03-09T17:51:50.998622+0000 mgr.y (mgr.14505) 966 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:52 vm00 bash[28333]: cluster 2026-03-09T17:51:50.998622+0000 mgr.y (mgr.14505) 966 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:52 vm00 bash[20770]: cluster 2026-03-09T17:51:50.998622+0000 mgr.y (mgr.14505) 966 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:52 vm00 bash[20770]: cluster 2026-03-09T17:51:50.998622+0000 mgr.y (mgr.14505) 966 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:52.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:51:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:51:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:52 vm02 bash[23351]: cluster 2026-03-09T17:51:50.998622+0000 mgr.y (mgr.14505) 966 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:52.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:52 vm02 bash[23351]: cluster 2026-03-09T17:51:50.998622+0000 mgr.y (mgr.14505) 966 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:54 vm00 bash[28333]: audit 2026-03-09T17:51:52.640966+0000 mgr.y (mgr.14505) 967 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
17:51:54 vm00 bash[28333]: audit 2026-03-09T17:51:52.640966+0000 mgr.y (mgr.14505) 967 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:54 vm00 bash[28333]: cluster 2026-03-09T17:51:52.999281+0000 mgr.y (mgr.14505) 968 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:54 vm00 bash[28333]: cluster 2026-03-09T17:51:52.999281+0000 mgr.y (mgr.14505) 968 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:54 vm00 bash[20770]: audit 2026-03-09T17:51:52.640966+0000 mgr.y (mgr.14505) 967 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:54 vm00 bash[20770]: audit 2026-03-09T17:51:52.640966+0000 mgr.y (mgr.14505) 967 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:54 vm00 bash[20770]: cluster 2026-03-09T17:51:52.999281+0000 mgr.y (mgr.14505) 968 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:54 vm00 bash[20770]: cluster 2026-03-09T17:51:52.999281+0000 mgr.y (mgr.14505) 968 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:54 vm02 bash[23351]: audit 2026-03-09T17:51:52.640966+0000 mgr.y (mgr.14505) 967 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:54 vm02 bash[23351]: audit 2026-03-09T17:51:52.640966+0000 mgr.y (mgr.14505) 967 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:51:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:54 vm02 bash[23351]: cluster 2026-03-09T17:51:52.999281+0000 mgr.y (mgr.14505) 968 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:54 vm02 bash[23351]: cluster 2026-03-09T17:51:52.999281+0000 mgr.y (mgr.14505) 968 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:51:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:56 vm00 bash[28333]: cluster 2026-03-09T17:51:54.999615+0000 mgr.y (mgr.14505) 969 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:56 vm00 bash[28333]: cluster 2026-03-09T17:51:54.999615+0000 mgr.y (mgr.14505) 969 : 
cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:56 vm00 bash[20770]: cluster 2026-03-09T17:51:54.999615+0000 mgr.y (mgr.14505) 969 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:56 vm00 bash[20770]: cluster 2026-03-09T17:51:54.999615+0000 mgr.y (mgr.14505) 969 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:51:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:51:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:51:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:56 vm02 bash[23351]: cluster 2026-03-09T17:51:54.999615+0000 mgr.y (mgr.14505) 969 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:56 vm02 bash[23351]: cluster 2026-03-09T17:51:54.999615+0000 mgr.y (mgr.14505) 969 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:58 vm00 bash[28333]: cluster 2026-03-09T17:51:56.999936+0000 mgr.y (mgr.14505) 970 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:58 vm00 bash[28333]: cluster 2026-03-09T17:51:56.999936+0000 mgr.y (mgr.14505) 970 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:58 vm00 bash[20770]: cluster 2026-03-09T17:51:56.999936+0000 mgr.y (mgr.14505) 970 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:58 vm00 bash[20770]: cluster 2026-03-09T17:51:56.999936+0000 mgr.y (mgr.14505) 970 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:58 vm02 bash[23351]: cluster 2026-03-09T17:51:56.999936+0000 mgr.y (mgr.14505) 970 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:58 vm02 bash[23351]: cluster 2026-03-09T17:51:56.999936+0000 mgr.y (mgr.14505) 970 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:51:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:59 vm00 bash[28333]: audit 2026-03-09T17:51:58.725566+0000 mon.c (mon.2) 880 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:59.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:51:59 vm00 bash[28333]: audit 2026-03-09T17:51:58.725566+0000 mon.c (mon.2) 880 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:59 vm00 bash[20770]: audit 2026-03-09T17:51:58.725566+0000 mon.c (mon.2) 880 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:51:59 vm00 bash[20770]: audit 2026-03-09T17:51:58.725566+0000 mon.c (mon.2) 880 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:59 vm02 bash[23351]: audit 2026-03-09T17:51:58.725566+0000 mon.c (mon.2) 880 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:51:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:51:59 vm02 bash[23351]: audit 2026-03-09T17:51:58.725566+0000 mon.c (mon.2) 880 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:00 vm00 bash[28333]: cluster 2026-03-09T17:51:59.000526+0000 mgr.y (mgr.14505) 971 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:00 vm00 bash[28333]: cluster 2026-03-09T17:51:59.000526+0000 mgr.y (mgr.14505) 971 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:00 vm00 bash[20770]: cluster 2026-03-09T17:51:59.000526+0000 mgr.y (mgr.14505) 971 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:00 vm00 bash[20770]: cluster 2026-03-09T17:51:59.000526+0000 mgr.y (mgr.14505) 971 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:00 vm02 bash[23351]: cluster 2026-03-09T17:51:59.000526+0000 mgr.y (mgr.14505) 971 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:00 vm02 bash[23351]: cluster 2026-03-09T17:51:59.000526+0000 mgr.y (mgr.14505) 971 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:02 vm00 bash[28333]: cluster 2026-03-09T17:52:01.000835+0000 mgr.y (mgr.14505) 972 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:02 vm00 
bash[28333]: cluster 2026-03-09T17:52:01.000835+0000 mgr.y (mgr.14505) 972 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:02 vm00 bash[20770]: cluster 2026-03-09T17:52:01.000835+0000 mgr.y (mgr.14505) 972 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:02 vm00 bash[20770]: cluster 2026-03-09T17:52:01.000835+0000 mgr.y (mgr.14505) 972 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:02 vm02 bash[23351]: cluster 2026-03-09T17:52:01.000835+0000 mgr.y (mgr.14505) 972 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:02 vm02 bash[23351]: cluster 2026-03-09T17:52:01.000835+0000 mgr.y (mgr.14505) 972 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:02.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:52:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:04 vm00 bash[28333]: audit 2026-03-09T17:52:02.651065+0000 mgr.y (mgr.14505) 973 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:04 vm00 bash[28333]: audit 2026-03-09T17:52:02.651065+0000 mgr.y (mgr.14505) 973 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:04 vm00 bash[28333]: cluster 2026-03-09T17:52:03.002503+0000 mgr.y (mgr.14505) 974 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:04 vm00 bash[28333]: cluster 2026-03-09T17:52:03.002503+0000 mgr.y (mgr.14505) 974 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:04 vm00 bash[20770]: audit 2026-03-09T17:52:02.651065+0000 mgr.y (mgr.14505) 973 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:04 vm00 bash[20770]: audit 2026-03-09T17:52:02.651065+0000 mgr.y (mgr.14505) 973 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:04 vm00 bash[20770]: cluster 2026-03-09T17:52:03.002503+0000 mgr.y (mgr.14505) 974 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:04.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:04 vm00 bash[20770]: cluster 2026-03-09T17:52:03.002503+0000 mgr.y (mgr.14505) 974 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:04 vm02 bash[23351]: audit 2026-03-09T17:52:02.651065+0000 mgr.y (mgr.14505) 973 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:04 vm02 bash[23351]: audit 2026-03-09T17:52:02.651065+0000 mgr.y (mgr.14505) 973 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:04 vm02 bash[23351]: cluster 2026-03-09T17:52:03.002503+0000 mgr.y (mgr.14505) 974 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:04 vm02 bash[23351]: cluster 2026-03-09T17:52:03.002503+0000 mgr.y (mgr.14505) 974 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:06 vm00 bash[28333]: cluster 2026-03-09T17:52:05.002937+0000 mgr.y (mgr.14505) 975 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:06 vm00 bash[28333]: cluster 2026-03-09T17:52:05.002937+0000 mgr.y (mgr.14505) 975 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:06 vm00 bash[20770]: cluster 2026-03-09T17:52:05.002937+0000 mgr.y (mgr.14505) 975 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:06 vm00 bash[20770]: cluster 2026-03-09T17:52:05.002937+0000 mgr.y (mgr.14505) 975 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:52:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:52:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:52:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:06 vm02 bash[23351]: cluster 2026-03-09T17:52:05.002937+0000 mgr.y (mgr.14505) 975 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:06 vm02 bash[23351]: cluster 2026-03-09T17:52:05.002937+0000 mgr.y (mgr.14505) 975 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:07 vm00 bash[28333]: audit 2026-03-09T17:52:06.978747+0000 mon.c (mon.2) 881 : audit [DBG] from='mgr.14505 
192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:07 vm00 bash[28333]: audit 2026-03-09T17:52:06.978747+0000 mon.c (mon.2) 881 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:07 vm00 bash[28333]: audit 2026-03-09T17:52:07.289068+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:07 vm00 bash[28333]: audit 2026-03-09T17:52:07.289068+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:07 vm00 bash[28333]: audit 2026-03-09T17:52:07.295154+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:07 vm00 bash[28333]: audit 2026-03-09T17:52:07.295154+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:07 vm00 bash[20770]: audit 2026-03-09T17:52:06.978747+0000 mon.c (mon.2) 881 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:07 vm00 bash[20770]: audit 2026-03-09T17:52:06.978747+0000 mon.c (mon.2) 881 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:07 vm00 bash[20770]: audit 2026-03-09T17:52:07.289068+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:07 vm00 bash[20770]: audit 2026-03-09T17:52:07.289068+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:07 vm00 bash[20770]: audit 2026-03-09T17:52:07.295154+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:07 vm00 bash[20770]: audit 2026-03-09T17:52:07.295154+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:07 vm02 bash[23351]: audit 2026-03-09T17:52:06.978747+0000 mon.c (mon.2) 881 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:52:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:07 vm02 bash[23351]: audit 2026-03-09T17:52:06.978747+0000 mon.c (mon.2) 881 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:52:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:07 vm02 bash[23351]: audit 2026-03-09T17:52:07.289068+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:07 vm02 bash[23351]: audit 2026-03-09T17:52:07.289068+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.14505 ' entity='mgr.y' 
2026-03-09T17:52:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:07 vm02 bash[23351]: audit 2026-03-09T17:52:07.295154+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:07.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:07 vm02 bash[23351]: audit 2026-03-09T17:52:07.295154+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: cluster 2026-03-09T17:52:07.003287+0000 mgr.y (mgr.14505) 976 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: cluster 2026-03-09T17:52:07.003287+0000 mgr.y (mgr.14505) 976 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: audit 2026-03-09T17:52:07.603016+0000 mon.c (mon.2) 882 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: audit 2026-03-09T17:52:07.603016+0000 mon.c (mon.2) 882 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: audit 2026-03-09T17:52:07.603781+0000 mon.c (mon.2) 883 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: audit 2026-03-09T17:52:07.603781+0000 mon.c (mon.2) 883 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: audit 2026-03-09T17:52:07.608752+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:08 vm00 bash[28333]: audit 2026-03-09T17:52:07.608752+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: cluster 2026-03-09T17:52:07.003287+0000 mgr.y (mgr.14505) 976 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: cluster 2026-03-09T17:52:07.003287+0000 mgr.y (mgr.14505) 976 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: audit 2026-03-09T17:52:07.603016+0000 mon.c (mon.2) 882 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:52:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: audit 2026-03-09T17:52:07.603016+0000 mon.c (mon.2) 
882 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:52:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: audit 2026-03-09T17:52:07.603781+0000 mon.c (mon.2) 883 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:52:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: audit 2026-03-09T17:52:07.603781+0000 mon.c (mon.2) 883 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:52:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: audit 2026-03-09T17:52:07.608752+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:08 vm00 bash[20770]: audit 2026-03-09T17:52:07.608752+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: cluster 2026-03-09T17:52:07.003287+0000 mgr.y (mgr.14505) 976 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: cluster 2026-03-09T17:52:07.003287+0000 mgr.y (mgr.14505) 976 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: audit 2026-03-09T17:52:07.603016+0000 mon.c (mon.2) 882 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:52:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: audit 2026-03-09T17:52:07.603016+0000 mon.c (mon.2) 882 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:52:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: audit 2026-03-09T17:52:07.603781+0000 mon.c (mon.2) 883 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:52:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: audit 2026-03-09T17:52:07.603781+0000 mon.c (mon.2) 883 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:52:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: audit 2026-03-09T17:52:07.608752+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:08 vm02 bash[23351]: audit 2026-03-09T17:52:07.608752+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:52:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:09 vm00 bash[28333]: cluster 2026-03-09T17:52:09.003877+0000 mgr.y (mgr.14505) 977 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T17:52:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:09 vm00 bash[28333]: cluster 2026-03-09T17:52:09.003877+0000 mgr.y (mgr.14505) 977 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:09 vm00 bash[20770]: cluster 2026-03-09T17:52:09.003877+0000 mgr.y (mgr.14505) 977 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:09 vm00 bash[20770]: cluster 2026-03-09T17:52:09.003877+0000 mgr.y (mgr.14505) 977 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:09.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:09 vm02 bash[23351]: cluster 2026-03-09T17:52:09.003877+0000 mgr.y (mgr.14505) 977 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:09.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:09 vm02 bash[23351]: cluster 2026-03-09T17:52:09.003877+0000 mgr.y (mgr.14505) 977 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:12 vm02 bash[23351]: cluster 2026-03-09T17:52:11.004146+0000 mgr.y (mgr.14505) 978 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:12 vm02 bash[23351]: cluster 2026-03-09T17:52:11.004146+0000 mgr.y (mgr.14505) 978 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:12 vm00 bash[28333]: cluster 2026-03-09T17:52:11.004146+0000 mgr.y (mgr.14505) 978 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:12 vm00 bash[28333]: cluster 2026-03-09T17:52:11.004146+0000 mgr.y (mgr.14505) 978 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:12 vm00 bash[20770]: cluster 2026-03-09T17:52:11.004146+0000 mgr.y (mgr.14505) 978 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:12 vm00 bash[20770]: cluster 2026-03-09T17:52:11.004146+0000 mgr.y (mgr.14505) 978 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:13.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:52:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:52:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:14 vm02 bash[23351]: audit 2026-03-09T17:52:12.661784+0000 mgr.y (mgr.14505) 979 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:14 vm02 bash[23351]: audit 2026-03-09T17:52:12.661784+0000 mgr.y (mgr.14505) 979 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:14 vm02 bash[23351]: cluster 2026-03-09T17:52:13.004803+0000 mgr.y (mgr.14505) 980 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:14 vm02 bash[23351]: cluster 2026-03-09T17:52:13.004803+0000 mgr.y (mgr.14505) 980 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:14 vm02 bash[23351]: audit 2026-03-09T17:52:13.731463+0000 mon.c (mon.2) 884 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:14 vm02 bash[23351]: audit 2026-03-09T17:52:13.731463+0000 mon.c (mon.2) 884 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:14 vm00 bash[28333]: audit 2026-03-09T17:52:12.661784+0000 mgr.y (mgr.14505) 979 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:14 vm00 bash[28333]: audit 2026-03-09T17:52:12.661784+0000 mgr.y (mgr.14505) 979 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:14 vm00 bash[28333]: cluster 2026-03-09T17:52:13.004803+0000 mgr.y (mgr.14505) 980 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:14 vm00 bash[28333]: cluster 2026-03-09T17:52:13.004803+0000 mgr.y (mgr.14505) 980 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:14 vm00 bash[28333]: audit 2026-03-09T17:52:13.731463+0000 mon.c (mon.2) 884 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:14 vm00 bash[28333]: audit 2026-03-09T17:52:13.731463+0000 mon.c (mon.2) 884 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:14 vm00 bash[20770]: audit 2026-03-09T17:52:12.661784+0000 mgr.y (mgr.14505) 979 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:14 vm00 bash[20770]: audit 2026-03-09T17:52:12.661784+0000 mgr.y (mgr.14505) 979 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:14 vm00 bash[20770]: cluster 2026-03-09T17:52:13.004803+0000 mgr.y (mgr.14505) 980 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:14 vm00 bash[20770]: cluster 2026-03-09T17:52:13.004803+0000 mgr.y (mgr.14505) 980 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:14 vm00 bash[20770]: audit 2026-03-09T17:52:13.731463+0000 mon.c (mon.2) 884 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:14 vm00 bash[20770]: audit 2026-03-09T17:52:13.731463+0000 mon.c (mon.2) 884 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:16.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:16 vm02 bash[23351]: cluster 2026-03-09T17:52:15.005215+0000 mgr.y (mgr.14505) 981 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:16.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:16 vm02 bash[23351]: cluster 2026-03-09T17:52:15.005215+0000 mgr.y (mgr.14505) 981 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:16 vm00 bash[28333]: cluster 2026-03-09T17:52:15.005215+0000 mgr.y (mgr.14505) 981 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:16 vm00 bash[28333]: cluster 2026-03-09T17:52:15.005215+0000 mgr.y (mgr.14505) 981 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:16 vm00 bash[20770]: cluster 2026-03-09T17:52:15.005215+0000 mgr.y (mgr.14505) 981 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:16 vm00 bash[20770]: cluster 2026-03-09T17:52:15.005215+0000 mgr.y (mgr.14505) 981 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:52:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:52:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:52:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:18 vm02 bash[23351]: cluster 2026-03-09T17:52:17.005534+0000 mgr.y (mgr.14505) 982 : cluster 
[DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:18.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:18 vm02 bash[23351]: cluster 2026-03-09T17:52:17.005534+0000 mgr.y (mgr.14505) 982 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:18 vm00 bash[28333]: cluster 2026-03-09T17:52:17.005534+0000 mgr.y (mgr.14505) 982 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:18 vm00 bash[28333]: cluster 2026-03-09T17:52:17.005534+0000 mgr.y (mgr.14505) 982 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:18 vm00 bash[20770]: cluster 2026-03-09T17:52:17.005534+0000 mgr.y (mgr.14505) 982 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:18 vm00 bash[20770]: cluster 2026-03-09T17:52:17.005534+0000 mgr.y (mgr.14505) 982 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:20.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:20 vm02 bash[23351]: cluster 2026-03-09T17:52:19.006145+0000 mgr.y (mgr.14505) 983 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:20.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:20 vm02 bash[23351]: cluster 2026-03-09T17:52:19.006145+0000 mgr.y (mgr.14505) 983 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:20 vm00 bash[28333]: cluster 2026-03-09T17:52:19.006145+0000 mgr.y (mgr.14505) 983 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:20 vm00 bash[28333]: cluster 2026-03-09T17:52:19.006145+0000 mgr.y (mgr.14505) 983 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:20 vm00 bash[20770]: cluster 2026-03-09T17:52:19.006145+0000 mgr.y (mgr.14505) 983 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:20 vm00 bash[20770]: cluster 2026-03-09T17:52:19.006145+0000 mgr.y (mgr.14505) 983 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:22 vm02 bash[23351]: cluster 2026-03-09T17:52:21.006399+0000 mgr.y (mgr.14505) 984 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:22.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:22 vm02 bash[23351]: cluster 2026-03-09T17:52:21.006399+0000 mgr.y (mgr.14505) 984 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:22 vm00 bash[28333]: cluster 2026-03-09T17:52:21.006399+0000 mgr.y (mgr.14505) 984 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:22 vm00 bash[28333]: cluster 2026-03-09T17:52:21.006399+0000 mgr.y (mgr.14505) 984 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:22 vm00 bash[20770]: cluster 2026-03-09T17:52:21.006399+0000 mgr.y (mgr.14505) 984 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:22 vm00 bash[20770]: cluster 2026-03-09T17:52:21.006399+0000 mgr.y (mgr.14505) 984 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:23.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:52:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:52:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:24 vm02 bash[23351]: audit 2026-03-09T17:52:22.669261+0000 mgr.y (mgr.14505) 985 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:24 vm02 bash[23351]: audit 2026-03-09T17:52:22.669261+0000 mgr.y (mgr.14505) 985 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:24.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:24 vm02 bash[23351]: cluster 2026-03-09T17:52:23.007003+0000 mgr.y (mgr.14505) 986 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:24 vm02 bash[23351]: cluster 2026-03-09T17:52:23.007003+0000 mgr.y (mgr.14505) 986 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:24 vm00 bash[28333]: audit 2026-03-09T17:52:22.669261+0000 mgr.y (mgr.14505) 985 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:24 vm00 bash[28333]: audit 2026-03-09T17:52:22.669261+0000 mgr.y (mgr.14505) 985 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:24 vm00 bash[28333]: cluster 2026-03-09T17:52:23.007003+0000 mgr.y (mgr.14505) 986 : cluster [DBG] pgmap v1439: 228 pgs: 228 
active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:24 vm00 bash[28333]: cluster 2026-03-09T17:52:23.007003+0000 mgr.y (mgr.14505) 986 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:24 vm00 bash[20770]: audit 2026-03-09T17:52:22.669261+0000 mgr.y (mgr.14505) 985 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:24 vm00 bash[20770]: audit 2026-03-09T17:52:22.669261+0000 mgr.y (mgr.14505) 985 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:24 vm00 bash[20770]: cluster 2026-03-09T17:52:23.007003+0000 mgr.y (mgr.14505) 986 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:24 vm00 bash[20770]: cluster 2026-03-09T17:52:23.007003+0000 mgr.y (mgr.14505) 986 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:26 vm02 bash[23351]: cluster 2026-03-09T17:52:25.007394+0000 mgr.y (mgr.14505) 987 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:26 vm02 bash[23351]: cluster 2026-03-09T17:52:25.007394+0000 mgr.y (mgr.14505) 987 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:26 vm00 bash[28333]: cluster 2026-03-09T17:52:25.007394+0000 mgr.y (mgr.14505) 987 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:26 vm00 bash[28333]: cluster 2026-03-09T17:52:25.007394+0000 mgr.y (mgr.14505) 987 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:26 vm00 bash[20770]: cluster 2026-03-09T17:52:25.007394+0000 mgr.y (mgr.14505) 987 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:26 vm00 bash[20770]: cluster 2026-03-09T17:52:25.007394+0000 mgr.y (mgr.14505) 987 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:52:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:52:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:52:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:28 vm02 
bash[23351]: cluster 2026-03-09T17:52:27.007733+0000 mgr.y (mgr.14505) 988 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:28 vm02 bash[23351]: cluster 2026-03-09T17:52:27.007733+0000 mgr.y (mgr.14505) 988 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:28 vm00 bash[28333]: cluster 2026-03-09T17:52:27.007733+0000 mgr.y (mgr.14505) 988 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:28 vm00 bash[28333]: cluster 2026-03-09T17:52:27.007733+0000 mgr.y (mgr.14505) 988 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:28 vm00 bash[20770]: cluster 2026-03-09T17:52:27.007733+0000 mgr.y (mgr.14505) 988 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:28 vm00 bash[20770]: cluster 2026-03-09T17:52:27.007733+0000 mgr.y (mgr.14505) 988 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:29 vm02 bash[23351]: audit 2026-03-09T17:52:28.738424+0000 mon.c (mon.2) 885 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:29 vm02 bash[23351]: audit 2026-03-09T17:52:28.738424+0000 mon.c (mon.2) 885 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:29 vm00 bash[28333]: audit 2026-03-09T17:52:28.738424+0000 mon.c (mon.2) 885 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:29 vm00 bash[28333]: audit 2026-03-09T17:52:28.738424+0000 mon.c (mon.2) 885 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:29 vm00 bash[20770]: audit 2026-03-09T17:52:28.738424+0000 mon.c (mon.2) 885 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:29 vm00 bash[20770]: audit 2026-03-09T17:52:28.738424+0000 mon.c (mon.2) 885 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:30 vm02 bash[23351]: cluster 2026-03-09T17:52:29.008365+0000 mgr.y 
(mgr.14505) 989 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:30 vm02 bash[23351]: cluster 2026-03-09T17:52:29.008365+0000 mgr.y (mgr.14505) 989 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:30 vm00 bash[28333]: cluster 2026-03-09T17:52:29.008365+0000 mgr.y (mgr.14505) 989 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:30 vm00 bash[28333]: cluster 2026-03-09T17:52:29.008365+0000 mgr.y (mgr.14505) 989 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:30 vm00 bash[20770]: cluster 2026-03-09T17:52:29.008365+0000 mgr.y (mgr.14505) 989 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:30 vm00 bash[20770]: cluster 2026-03-09T17:52:29.008365+0000 mgr.y (mgr.14505) 989 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:32.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:32 vm02 bash[23351]: cluster 2026-03-09T17:52:31.008628+0000 mgr.y (mgr.14505) 990 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:32.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:32 vm02 bash[23351]: cluster 2026-03-09T17:52:31.008628+0000 mgr.y (mgr.14505) 990 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:32 vm00 bash[28333]: cluster 2026-03-09T17:52:31.008628+0000 mgr.y (mgr.14505) 990 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:32 vm00 bash[28333]: cluster 2026-03-09T17:52:31.008628+0000 mgr.y (mgr.14505) 990 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:32 vm00 bash[20770]: cluster 2026-03-09T17:52:31.008628+0000 mgr.y (mgr.14505) 990 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:32 vm00 bash[20770]: cluster 2026-03-09T17:52:31.008628+0000 mgr.y (mgr.14505) 990 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:33.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:52:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:52:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:34 
vm02 bash[23351]: audit 2026-03-09T17:52:32.670322+0000 mgr.y (mgr.14505) 991 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:34 vm02 bash[23351]: audit 2026-03-09T17:52:32.670322+0000 mgr.y (mgr.14505) 991 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:34 vm02 bash[23351]: cluster 2026-03-09T17:52:33.009260+0000 mgr.y (mgr.14505) 992 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:34 vm02 bash[23351]: cluster 2026-03-09T17:52:33.009260+0000 mgr.y (mgr.14505) 992 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:34 vm00 bash[28333]: audit 2026-03-09T17:52:32.670322+0000 mgr.y (mgr.14505) 991 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:34 vm00 bash[28333]: audit 2026-03-09T17:52:32.670322+0000 mgr.y (mgr.14505) 991 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:34 vm00 bash[28333]: cluster 2026-03-09T17:52:33.009260+0000 mgr.y (mgr.14505) 992 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:34 vm00 bash[28333]: cluster 2026-03-09T17:52:33.009260+0000 mgr.y (mgr.14505) 992 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:34 vm00 bash[20770]: audit 2026-03-09T17:52:32.670322+0000 mgr.y (mgr.14505) 991 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:34 vm00 bash[20770]: audit 2026-03-09T17:52:32.670322+0000 mgr.y (mgr.14505) 991 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:34 vm00 bash[20770]: cluster 2026-03-09T17:52:33.009260+0000 mgr.y (mgr.14505) 992 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:34 vm00 bash[20770]: cluster 2026-03-09T17:52:33.009260+0000 mgr.y (mgr.14505) 992 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:36 vm02 bash[23351]: cluster 2026-03-09T17:52:35.009661+0000 mgr.y (mgr.14505) 993 : cluster [DBG] 
pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:36 vm02 bash[23351]: cluster 2026-03-09T17:52:35.009661+0000 mgr.y (mgr.14505) 993 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:36 vm00 bash[28333]: cluster 2026-03-09T17:52:35.009661+0000 mgr.y (mgr.14505) 993 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:36 vm00 bash[28333]: cluster 2026-03-09T17:52:35.009661+0000 mgr.y (mgr.14505) 993 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:36 vm00 bash[20770]: cluster 2026-03-09T17:52:35.009661+0000 mgr.y (mgr.14505) 993 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:36 vm00 bash[20770]: cluster 2026-03-09T17:52:35.009661+0000 mgr.y (mgr.14505) 993 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:52:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:52:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:52:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:38 vm02 bash[23351]: cluster 2026-03-09T17:52:37.009968+0000 mgr.y (mgr.14505) 994 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:38 vm02 bash[23351]: cluster 2026-03-09T17:52:37.009968+0000 mgr.y (mgr.14505) 994 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:38 vm00 bash[28333]: cluster 2026-03-09T17:52:37.009968+0000 mgr.y (mgr.14505) 994 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:38 vm00 bash[28333]: cluster 2026-03-09T17:52:37.009968+0000 mgr.y (mgr.14505) 994 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:38 vm00 bash[20770]: cluster 2026-03-09T17:52:37.009968+0000 mgr.y (mgr.14505) 994 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:38 vm00 bash[20770]: cluster 2026-03-09T17:52:37.009968+0000 mgr.y (mgr.14505) 994 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
17:52:40 vm02 bash[23351]: cluster 2026-03-09T17:52:39.010752+0000 mgr.y (mgr.14505) 995 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:40 vm02 bash[23351]: cluster 2026-03-09T17:52:39.010752+0000 mgr.y (mgr.14505) 995 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:40 vm00 bash[28333]: cluster 2026-03-09T17:52:39.010752+0000 mgr.y (mgr.14505) 995 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:40 vm00 bash[28333]: cluster 2026-03-09T17:52:39.010752+0000 mgr.y (mgr.14505) 995 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:40 vm00 bash[20770]: cluster 2026-03-09T17:52:39.010752+0000 mgr.y (mgr.14505) 995 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:40 vm00 bash[20770]: cluster 2026-03-09T17:52:39.010752+0000 mgr.y (mgr.14505) 995 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:42 vm00 bash[28333]: cluster 2026-03-09T17:52:41.011089+0000 mgr.y (mgr.14505) 996 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:42 vm00 bash[28333]: cluster 2026-03-09T17:52:41.011089+0000 mgr.y (mgr.14505) 996 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:42 vm00 bash[20770]: cluster 2026-03-09T17:52:41.011089+0000 mgr.y (mgr.14505) 996 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:42 vm00 bash[20770]: cluster 2026-03-09T17:52:41.011089+0000 mgr.y (mgr.14505) 996 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:42.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:42 vm02 bash[23351]: cluster 2026-03-09T17:52:41.011089+0000 mgr.y (mgr.14505) 996 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:42.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:42 vm02 bash[23351]: cluster 2026-03-09T17:52:41.011089+0000 mgr.y (mgr.14505) 996 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:42.885 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:52:42 vm02 bash[48996]: debug there is no tcmu-runner data available 
2026-03-09T17:52:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:44 vm02 bash[23351]: audit 2026-03-09T17:52:42.674208+0000 mgr.y (mgr.14505) 997 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:44 vm02 bash[23351]: audit 2026-03-09T17:52:42.674208+0000 mgr.y (mgr.14505) 997 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:44 vm02 bash[23351]: cluster 2026-03-09T17:52:43.011758+0000 mgr.y (mgr.14505) 998 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:44 vm02 bash[23351]: cluster 2026-03-09T17:52:43.011758+0000 mgr.y (mgr.14505) 998 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:44 vm02 bash[23351]: audit 2026-03-09T17:52:43.744222+0000 mon.c (mon.2) 886 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:44 vm02 bash[23351]: audit 2026-03-09T17:52:43.744222+0000 mon.c (mon.2) 886 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:44 vm00 bash[28333]: audit 2026-03-09T17:52:42.674208+0000 mgr.y (mgr.14505) 997 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:44 vm00 bash[28333]: audit 2026-03-09T17:52:42.674208+0000 mgr.y (mgr.14505) 997 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:44 vm00 bash[28333]: cluster 2026-03-09T17:52:43.011758+0000 mgr.y (mgr.14505) 998 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:44 vm00 bash[28333]: cluster 2026-03-09T17:52:43.011758+0000 mgr.y (mgr.14505) 998 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:44 vm00 bash[28333]: audit 2026-03-09T17:52:43.744222+0000 mon.c (mon.2) 886 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:44 vm00 bash[28333]: audit 2026-03-09T17:52:43.744222+0000 mon.c (mon.2) 886 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:44 
vm00 bash[20770]: audit 2026-03-09T17:52:42.674208+0000 mgr.y (mgr.14505) 997 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:44 vm00 bash[20770]: audit 2026-03-09T17:52:42.674208+0000 mgr.y (mgr.14505) 997 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:44 vm00 bash[20770]: cluster 2026-03-09T17:52:43.011758+0000 mgr.y (mgr.14505) 998 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:44 vm00 bash[20770]: cluster 2026-03-09T17:52:43.011758+0000 mgr.y (mgr.14505) 998 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:44 vm00 bash[20770]: audit 2026-03-09T17:52:43.744222+0000 mon.c (mon.2) 886 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:44 vm00 bash[20770]: audit 2026-03-09T17:52:43.744222+0000 mon.c (mon.2) 886 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:45.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:45 vm02 bash[23351]: cluster 2026-03-09T17:52:45.012186+0000 mgr.y (mgr.14505) 999 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:45.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:45 vm02 bash[23351]: cluster 2026-03-09T17:52:45.012186+0000 mgr.y (mgr.14505) 999 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:45 vm00 bash[28333]: cluster 2026-03-09T17:52:45.012186+0000 mgr.y (mgr.14505) 999 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:45 vm00 bash[28333]: cluster 2026-03-09T17:52:45.012186+0000 mgr.y (mgr.14505) 999 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:45 vm00 bash[20770]: cluster 2026-03-09T17:52:45.012186+0000 mgr.y (mgr.14505) 999 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:45 vm00 bash[20770]: cluster 2026-03-09T17:52:45.012186+0000 mgr.y (mgr.14505) 999 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:52:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:52:46] "GET /metrics 
HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:52:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:48 vm02 bash[23351]: cluster 2026-03-09T17:52:47.012483+0000 mgr.y (mgr.14505) 1000 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:48 vm02 bash[23351]: cluster 2026-03-09T17:52:47.012483+0000 mgr.y (mgr.14505) 1000 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:48 vm00 bash[28333]: cluster 2026-03-09T17:52:47.012483+0000 mgr.y (mgr.14505) 1000 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:48 vm00 bash[28333]: cluster 2026-03-09T17:52:47.012483+0000 mgr.y (mgr.14505) 1000 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:48 vm00 bash[20770]: cluster 2026-03-09T17:52:47.012483+0000 mgr.y (mgr.14505) 1000 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:48 vm00 bash[20770]: cluster 2026-03-09T17:52:47.012483+0000 mgr.y (mgr.14505) 1000 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:50 vm02 bash[23351]: cluster 2026-03-09T17:52:49.013357+0000 mgr.y (mgr.14505) 1001 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:50 vm02 bash[23351]: cluster 2026-03-09T17:52:49.013357+0000 mgr.y (mgr.14505) 1001 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:50 vm00 bash[28333]: cluster 2026-03-09T17:52:49.013357+0000 mgr.y (mgr.14505) 1001 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:50 vm00 bash[28333]: cluster 2026-03-09T17:52:49.013357+0000 mgr.y (mgr.14505) 1001 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:50 vm00 bash[20770]: cluster 2026-03-09T17:52:49.013357+0000 mgr.y (mgr.14505) 1001 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:50 vm00 bash[20770]: cluster 2026-03-09T17:52:49.013357+0000 mgr.y (mgr.14505) 1001 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:52.385 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:52 vm02 bash[23351]: cluster 2026-03-09T17:52:51.013641+0000 mgr.y (mgr.14505) 1002 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:52.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:52 vm02 bash[23351]: cluster 2026-03-09T17:52:51.013641+0000 mgr.y (mgr.14505) 1002 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:52 vm00 bash[28333]: cluster 2026-03-09T17:52:51.013641+0000 mgr.y (mgr.14505) 1002 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:52 vm00 bash[28333]: cluster 2026-03-09T17:52:51.013641+0000 mgr.y (mgr.14505) 1002 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:52 vm00 bash[20770]: cluster 2026-03-09T17:52:51.013641+0000 mgr.y (mgr.14505) 1002 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:52 vm00 bash[20770]: cluster 2026-03-09T17:52:51.013641+0000 mgr.y (mgr.14505) 1002 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:53.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:52:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:52:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:54 vm02 bash[23351]: audit 2026-03-09T17:52:52.684762+0000 mgr.y (mgr.14505) 1003 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:54 vm02 bash[23351]: audit 2026-03-09T17:52:52.684762+0000 mgr.y (mgr.14505) 1003 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:54 vm02 bash[23351]: cluster 2026-03-09T17:52:53.014135+0000 mgr.y (mgr.14505) 1004 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:54 vm02 bash[23351]: cluster 2026-03-09T17:52:53.014135+0000 mgr.y (mgr.14505) 1004 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:54 vm00 bash[28333]: audit 2026-03-09T17:52:52.684762+0000 mgr.y (mgr.14505) 1003 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:54 vm00 bash[28333]: audit 2026-03-09T17:52:52.684762+0000 mgr.y (mgr.14505) 1003 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:54 vm00 bash[28333]: cluster 2026-03-09T17:52:53.014135+0000 mgr.y (mgr.14505) 1004 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:54 vm00 bash[28333]: cluster 2026-03-09T17:52:53.014135+0000 mgr.y (mgr.14505) 1004 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:54 vm00 bash[20770]: audit 2026-03-09T17:52:52.684762+0000 mgr.y (mgr.14505) 1003 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:54 vm00 bash[20770]: audit 2026-03-09T17:52:52.684762+0000 mgr.y (mgr.14505) 1003 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:54 vm00 bash[20770]: cluster 2026-03-09T17:52:53.014135+0000 mgr.y (mgr.14505) 1004 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:54 vm00 bash[20770]: cluster 2026-03-09T17:52:53.014135+0000 mgr.y (mgr.14505) 1004 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:52:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:56 vm02 bash[23351]: cluster 2026-03-09T17:52:55.014504+0000 mgr.y (mgr.14505) 1005 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:56 vm02 bash[23351]: cluster 2026-03-09T17:52:55.014504+0000 mgr.y (mgr.14505) 1005 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:52:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:52:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:52:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:56 vm00 bash[28333]: cluster 2026-03-09T17:52:55.014504+0000 mgr.y (mgr.14505) 1005 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:56 vm00 bash[28333]: cluster 2026-03-09T17:52:55.014504+0000 mgr.y (mgr.14505) 1005 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:56 vm00 bash[20770]: cluster 2026-03-09T17:52:55.014504+0000 mgr.y (mgr.14505) 1005 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:56 vm00 bash[20770]: cluster 
2026-03-09T17:52:55.014504+0000 mgr.y (mgr.14505) 1005 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:58 vm02 bash[23351]: cluster 2026-03-09T17:52:57.014800+0000 mgr.y (mgr.14505) 1006 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:58 vm02 bash[23351]: cluster 2026-03-09T17:52:57.014800+0000 mgr.y (mgr.14505) 1006 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:58 vm00 bash[28333]: cluster 2026-03-09T17:52:57.014800+0000 mgr.y (mgr.14505) 1006 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:58 vm00 bash[28333]: cluster 2026-03-09T17:52:57.014800+0000 mgr.y (mgr.14505) 1006 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:58 vm00 bash[20770]: cluster 2026-03-09T17:52:57.014800+0000 mgr.y (mgr.14505) 1006 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:58 vm00 bash[20770]: cluster 2026-03-09T17:52:57.014800+0000 mgr.y (mgr.14505) 1006 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:52:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:59 vm02 bash[23351]: audit 2026-03-09T17:52:58.750942+0000 mon.c (mon.2) 887 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:52:59 vm02 bash[23351]: audit 2026-03-09T17:52:58.750942+0000 mon.c (mon.2) 887 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:59 vm00 bash[28333]: audit 2026-03-09T17:52:58.750942+0000 mon.c (mon.2) 887 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:52:59 vm00 bash[28333]: audit 2026-03-09T17:52:58.750942+0000 mon.c (mon.2) 887 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:59 vm00 bash[20770]: audit 2026-03-09T17:52:58.750942+0000 mon.c (mon.2) 887 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:52:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:52:59 vm00 bash[20770]: audit 2026-03-09T17:52:58.750942+0000 mon.c (mon.2) 887 : audit [DBG] 
from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:00 vm02 bash[23351]: cluster 2026-03-09T17:52:59.015359+0000 mgr.y (mgr.14505) 1007 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:00.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:00 vm02 bash[23351]: cluster 2026-03-09T17:52:59.015359+0000 mgr.y (mgr.14505) 1007 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:00 vm00 bash[28333]: cluster 2026-03-09T17:52:59.015359+0000 mgr.y (mgr.14505) 1007 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:00 vm00 bash[28333]: cluster 2026-03-09T17:52:59.015359+0000 mgr.y (mgr.14505) 1007 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:00 vm00 bash[20770]: cluster 2026-03-09T17:52:59.015359+0000 mgr.y (mgr.14505) 1007 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:00 vm00 bash[20770]: cluster 2026-03-09T17:52:59.015359+0000 mgr.y (mgr.14505) 1007 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:02 vm02 bash[23351]: cluster 2026-03-09T17:53:01.015633+0000 mgr.y (mgr.14505) 1008 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:02.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:02 vm02 bash[23351]: cluster 2026-03-09T17:53:01.015633+0000 mgr.y (mgr.14505) 1008 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:02 vm00 bash[28333]: cluster 2026-03-09T17:53:01.015633+0000 mgr.y (mgr.14505) 1008 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:02 vm00 bash[28333]: cluster 2026-03-09T17:53:01.015633+0000 mgr.y (mgr.14505) 1008 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:02 vm00 bash[20770]: cluster 2026-03-09T17:53:01.015633+0000 mgr.y (mgr.14505) 1008 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:02 vm00 bash[20770]: cluster 2026-03-09T17:53:01.015633+0000 mgr.y (mgr.14505) 1008 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1004 
MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:03.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:53:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:53:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:04 vm02 bash[23351]: audit 2026-03-09T17:53:02.686355+0000 mgr.y (mgr.14505) 1009 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:04 vm02 bash[23351]: audit 2026-03-09T17:53:02.686355+0000 mgr.y (mgr.14505) 1009 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:04 vm02 bash[23351]: cluster 2026-03-09T17:53:03.016094+0000 mgr.y (mgr.14505) 1010 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:04.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:04 vm02 bash[23351]: cluster 2026-03-09T17:53:03.016094+0000 mgr.y (mgr.14505) 1010 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:04 vm00 bash[20770]: audit 2026-03-09T17:53:02.686355+0000 mgr.y (mgr.14505) 1009 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:04 vm00 bash[20770]: audit 2026-03-09T17:53:02.686355+0000 mgr.y (mgr.14505) 1009 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:04 vm00 bash[20770]: cluster 2026-03-09T17:53:03.016094+0000 mgr.y (mgr.14505) 1010 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:04 vm00 bash[20770]: cluster 2026-03-09T17:53:03.016094+0000 mgr.y (mgr.14505) 1010 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:04 vm00 bash[28333]: audit 2026-03-09T17:53:02.686355+0000 mgr.y (mgr.14505) 1009 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:04 vm00 bash[28333]: audit 2026-03-09T17:53:02.686355+0000 mgr.y (mgr.14505) 1009 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:04 vm00 bash[28333]: cluster 2026-03-09T17:53:03.016094+0000 mgr.y (mgr.14505) 1010 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:04 vm00 bash[28333]: cluster 2026-03-09T17:53:03.016094+0000 mgr.y (mgr.14505) 1010 : cluster 
[DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:06.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:06 vm02 bash[23351]: cluster 2026-03-09T17:53:05.016439+0000 mgr.y (mgr.14505) 1011 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:06.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:06 vm02 bash[23351]: cluster 2026-03-09T17:53:05.016439+0000 mgr.y (mgr.14505) 1011 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:06 vm00 bash[28333]: cluster 2026-03-09T17:53:05.016439+0000 mgr.y (mgr.14505) 1011 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:06 vm00 bash[28333]: cluster 2026-03-09T17:53:05.016439+0000 mgr.y (mgr.14505) 1011 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:06 vm00 bash[20770]: cluster 2026-03-09T17:53:05.016439+0000 mgr.y (mgr.14505) 1011 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:06 vm00 bash[20770]: cluster 2026-03-09T17:53:05.016439+0000 mgr.y (mgr.14505) 1011 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:53:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:53:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:53:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: cluster 2026-03-09T17:53:07.016762+0000 mgr.y (mgr.14505) 1012 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: cluster 2026-03-09T17:53:07.016762+0000 mgr.y (mgr.14505) 1012 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.647889+0000 mon.c (mon.2) 888 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:53:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.647889+0000 mon.c (mon.2) 888 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:53:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.960379+0000 mon.c (mon.2) 889 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:53:08.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.960379+0000 mon.c (mon.2) 889 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:53:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.960951+0000 mon.c (mon.2) 890 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:53:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.960951+0000 mon.c (mon.2) 890 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:53:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.966111+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:53:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:08 vm02 bash[23351]: audit 2026-03-09T17:53:07.966111+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: cluster 2026-03-09T17:53:07.016762+0000 mgr.y (mgr.14505) 1012 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: cluster 2026-03-09T17:53:07.016762+0000 mgr.y (mgr.14505) 1012 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.647889+0000 mon.c (mon.2) 888 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.647889+0000 mon.c (mon.2) 888 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.960379+0000 mon.c (mon.2) 889 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.960379+0000 mon.c (mon.2) 889 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.960951+0000 mon.c (mon.2) 890 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.960951+0000 mon.c (mon.2) 890 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.966111+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:08 vm00 bash[28333]: audit 2026-03-09T17:53:07.966111+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: cluster 2026-03-09T17:53:07.016762+0000 mgr.y (mgr.14505) 1012 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: cluster 2026-03-09T17:53:07.016762+0000 mgr.y (mgr.14505) 1012 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.647889+0000 mon.c (mon.2) 888 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.647889+0000 mon.c (mon.2) 888 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.960379+0000 mon.c (mon.2) 889 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.960379+0000 mon.c (mon.2) 889 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.960951+0000 mon.c (mon.2) 890 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.960951+0000 mon.c (mon.2) 890 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.966111+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:53:08.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:08 vm00 bash[20770]: audit 2026-03-09T17:53:07.966111+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:53:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:10 vm00 bash[28333]: cluster 2026-03-09T17:53:09.017477+0000 mgr.y (mgr.14505) 1013 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:10 vm00 bash[28333]: cluster 
2026-03-09T17:53:09.017477+0000 mgr.y (mgr.14505) 1013 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:10 vm00 bash[20770]: cluster 2026-03-09T17:53:09.017477+0000 mgr.y (mgr.14505) 1013 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:10 vm00 bash[20770]: cluster 2026-03-09T17:53:09.017477+0000 mgr.y (mgr.14505) 1013 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:10 vm02 bash[23351]: cluster 2026-03-09T17:53:09.017477+0000 mgr.y (mgr.14505) 1013 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:10 vm02 bash[23351]: cluster 2026-03-09T17:53:09.017477+0000 mgr.y (mgr.14505) 1013 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:12 vm00 bash[28333]: cluster 2026-03-09T17:53:11.017775+0000 mgr.y (mgr.14505) 1014 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:12 vm00 bash[28333]: cluster 2026-03-09T17:53:11.017775+0000 mgr.y (mgr.14505) 1014 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:12 vm00 bash[20770]: cluster 2026-03-09T17:53:11.017775+0000 mgr.y (mgr.14505) 1014 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:12 vm00 bash[20770]: cluster 2026-03-09T17:53:11.017775+0000 mgr.y (mgr.14505) 1014 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:12 vm02 bash[23351]: cluster 2026-03-09T17:53:11.017775+0000 mgr.y (mgr.14505) 1014 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:12 vm02 bash[23351]: cluster 2026-03-09T17:53:11.017775+0000 mgr.y (mgr.14505) 1014 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:13.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:53:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:14 vm00 bash[20770]: audit 2026-03-09T17:53:12.696923+0000 mgr.y (mgr.14505) 1015 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:14 vm00 bash[20770]: audit 2026-03-09T17:53:12.696923+0000 mgr.y (mgr.14505) 1015 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:14 vm00 bash[20770]: cluster 2026-03-09T17:53:13.018414+0000 mgr.y (mgr.14505) 1016 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:14 vm00 bash[20770]: cluster 2026-03-09T17:53:13.018414+0000 mgr.y (mgr.14505) 1016 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:14 vm00 bash[20770]: audit 2026-03-09T17:53:13.762504+0000 mon.c (mon.2) 891 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:14 vm00 bash[20770]: audit 2026-03-09T17:53:13.762504+0000 mon.c (mon.2) 891 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:14 vm00 bash[28333]: audit 2026-03-09T17:53:12.696923+0000 mgr.y (mgr.14505) 1015 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:14 vm00 bash[28333]: audit 2026-03-09T17:53:12.696923+0000 mgr.y (mgr.14505) 1015 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:14 vm00 bash[28333]: cluster 2026-03-09T17:53:13.018414+0000 mgr.y (mgr.14505) 1016 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:14 vm00 bash[28333]: cluster 2026-03-09T17:53:13.018414+0000 mgr.y (mgr.14505) 1016 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:14 vm00 bash[28333]: audit 2026-03-09T17:53:13.762504+0000 mon.c (mon.2) 891 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:14 vm00 bash[28333]: audit 2026-03-09T17:53:13.762504+0000 mon.c (mon.2) 891 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:14 vm02 bash[23351]: audit 2026-03-09T17:53:12.696923+0000 mgr.y (mgr.14505) 1015 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:14 vm02 
bash[23351]: audit 2026-03-09T17:53:12.696923+0000 mgr.y (mgr.14505) 1015 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:14 vm02 bash[23351]: cluster 2026-03-09T17:53:13.018414+0000 mgr.y (mgr.14505) 1016 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:14 vm02 bash[23351]: cluster 2026-03-09T17:53:13.018414+0000 mgr.y (mgr.14505) 1016 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:14 vm02 bash[23351]: audit 2026-03-09T17:53:13.762504+0000 mon.c (mon.2) 891 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:14 vm02 bash[23351]: audit 2026-03-09T17:53:13.762504+0000 mon.c (mon.2) 891 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:16 vm00 bash[28333]: cluster 2026-03-09T17:53:15.019399+0000 mgr.y (mgr.14505) 1017 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:16 vm00 bash[28333]: cluster 2026-03-09T17:53:15.019399+0000 mgr.y (mgr.14505) 1017 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:16 vm00 bash[20770]: cluster 2026-03-09T17:53:15.019399+0000 mgr.y (mgr.14505) 1017 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:16 vm00 bash[20770]: cluster 2026-03-09T17:53:15.019399+0000 mgr.y (mgr.14505) 1017 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:53:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:53:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:53:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:16 vm02 bash[23351]: cluster 2026-03-09T17:53:15.019399+0000 mgr.y (mgr.14505) 1017 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:16 vm02 bash[23351]: cluster 2026-03-09T17:53:15.019399+0000 mgr.y (mgr.14505) 1017 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:18 vm00 bash[28333]: cluster 2026-03-09T17:53:17.019665+0000 mgr.y (mgr.14505) 1018 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:18 vm00 bash[28333]: cluster 2026-03-09T17:53:17.019665+0000 mgr.y (mgr.14505) 1018 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:18 vm00 bash[20770]: cluster 2026-03-09T17:53:17.019665+0000 mgr.y (mgr.14505) 1018 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:18 vm00 bash[20770]: cluster 2026-03-09T17:53:17.019665+0000 mgr.y (mgr.14505) 1018 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:18 vm02 bash[23351]: cluster 2026-03-09T17:53:17.019665+0000 mgr.y (mgr.14505) 1018 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:18 vm02 bash[23351]: cluster 2026-03-09T17:53:17.019665+0000 mgr.y (mgr.14505) 1018 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:20 vm00 bash[28333]: cluster 2026-03-09T17:53:19.020340+0000 mgr.y (mgr.14505) 1019 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:20 vm00 bash[28333]: cluster 2026-03-09T17:53:19.020340+0000 mgr.y (mgr.14505) 1019 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:20 vm00 bash[20770]: cluster 2026-03-09T17:53:19.020340+0000 mgr.y (mgr.14505) 1019 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:20 vm00 bash[20770]: cluster 2026-03-09T17:53:19.020340+0000 mgr.y (mgr.14505) 1019 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:20 vm02 bash[23351]: cluster 2026-03-09T17:53:19.020340+0000 mgr.y (mgr.14505) 1019 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:20 vm02 bash[23351]: cluster 2026-03-09T17:53:19.020340+0000 mgr.y (mgr.14505) 1019 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:22 vm00 bash[20770]: cluster 2026-03-09T17:53:21.020682+0000 mgr.y (mgr.14505) 1020 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:22.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:22 vm00 bash[20770]: cluster 2026-03-09T17:53:21.020682+0000 mgr.y (mgr.14505) 1020 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:22 vm00 bash[28333]: cluster 2026-03-09T17:53:21.020682+0000 mgr.y (mgr.14505) 1020 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:22 vm00 bash[28333]: cluster 2026-03-09T17:53:21.020682+0000 mgr.y (mgr.14505) 1020 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:22 vm02 bash[23351]: cluster 2026-03-09T17:53:21.020682+0000 mgr.y (mgr.14505) 1020 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:22 vm02 bash[23351]: cluster 2026-03-09T17:53:21.020682+0000 mgr.y (mgr.14505) 1020 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:23.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:53:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:24 vm00 bash[20770]: audit 2026-03-09T17:53:22.707484+0000 mgr.y (mgr.14505) 1021 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:24 vm00 bash[20770]: audit 2026-03-09T17:53:22.707484+0000 mgr.y (mgr.14505) 1021 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:24 vm00 bash[20770]: cluster 2026-03-09T17:53:23.021155+0000 mgr.y (mgr.14505) 1022 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:24 vm00 bash[20770]: cluster 2026-03-09T17:53:23.021155+0000 mgr.y (mgr.14505) 1022 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:24 vm00 bash[28333]: audit 2026-03-09T17:53:22.707484+0000 mgr.y (mgr.14505) 1021 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:24 vm00 bash[28333]: audit 2026-03-09T17:53:22.707484+0000 mgr.y (mgr.14505) 1021 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:24 vm00 bash[28333]: cluster 2026-03-09T17:53:23.021155+0000 mgr.y (mgr.14505) 1022 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:24 vm00 bash[28333]: cluster 2026-03-09T17:53:23.021155+0000 mgr.y (mgr.14505) 1022 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:24 vm02 bash[23351]: audit 2026-03-09T17:53:22.707484+0000 mgr.y (mgr.14505) 1021 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:24 vm02 bash[23351]: audit 2026-03-09T17:53:22.707484+0000 mgr.y (mgr.14505) 1021 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:24 vm02 bash[23351]: cluster 2026-03-09T17:53:23.021155+0000 mgr.y (mgr.14505) 1022 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:24 vm02 bash[23351]: cluster 2026-03-09T17:53:23.021155+0000 mgr.y (mgr.14505) 1022 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:26 vm00 bash[28333]: cluster 2026-03-09T17:53:25.021533+0000 mgr.y (mgr.14505) 1023 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:26 vm00 bash[28333]: cluster 2026-03-09T17:53:25.021533+0000 mgr.y (mgr.14505) 1023 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:26 vm00 bash[20770]: cluster 2026-03-09T17:53:25.021533+0000 mgr.y (mgr.14505) 1023 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:26 vm00 bash[20770]: cluster 2026-03-09T17:53:25.021533+0000 mgr.y (mgr.14505) 1023 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:53:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:53:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:53:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:26 vm02 bash[23351]: cluster 2026-03-09T17:53:25.021533+0000 mgr.y (mgr.14505) 1023 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:26 vm02 bash[23351]: cluster 2026-03-09T17:53:25.021533+0000 mgr.y (mgr.14505) 1023 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:28 vm00 bash[20770]: cluster 
2026-03-09T17:53:27.021854+0000 mgr.y (mgr.14505) 1024 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:28 vm00 bash[20770]: cluster 2026-03-09T17:53:27.021854+0000 mgr.y (mgr.14505) 1024 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:28 vm00 bash[28333]: cluster 2026-03-09T17:53:27.021854+0000 mgr.y (mgr.14505) 1024 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:28 vm00 bash[28333]: cluster 2026-03-09T17:53:27.021854+0000 mgr.y (mgr.14505) 1024 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:28 vm02 bash[23351]: cluster 2026-03-09T17:53:27.021854+0000 mgr.y (mgr.14505) 1024 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:28 vm02 bash[23351]: cluster 2026-03-09T17:53:27.021854+0000 mgr.y (mgr.14505) 1024 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:29 vm00 bash[20770]: audit 2026-03-09T17:53:28.768558+0000 mon.c (mon.2) 892 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:29 vm00 bash[20770]: audit 2026-03-09T17:53:28.768558+0000 mon.c (mon.2) 892 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:29 vm00 bash[28333]: audit 2026-03-09T17:53:28.768558+0000 mon.c (mon.2) 892 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:29 vm00 bash[28333]: audit 2026-03-09T17:53:28.768558+0000 mon.c (mon.2) 892 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:29.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:29 vm02 bash[23351]: audit 2026-03-09T17:53:28.768558+0000 mon.c (mon.2) 892 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:29 vm02 bash[23351]: audit 2026-03-09T17:53:28.768558+0000 mon.c (mon.2) 892 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:30 vm00 bash[28333]: cluster 2026-03-09T17:53:29.022465+0000 mgr.y (mgr.14505) 1025 : 
cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:30 vm00 bash[28333]: cluster 2026-03-09T17:53:29.022465+0000 mgr.y (mgr.14505) 1025 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:30 vm00 bash[20770]: cluster 2026-03-09T17:53:29.022465+0000 mgr.y (mgr.14505) 1025 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:30 vm00 bash[20770]: cluster 2026-03-09T17:53:29.022465+0000 mgr.y (mgr.14505) 1025 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:30 vm02 bash[23351]: cluster 2026-03-09T17:53:29.022465+0000 mgr.y (mgr.14505) 1025 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:30 vm02 bash[23351]: cluster 2026-03-09T17:53:29.022465+0000 mgr.y (mgr.14505) 1025 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:32 vm02 bash[23351]: cluster 2026-03-09T17:53:31.022828+0000 mgr.y (mgr.14505) 1026 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:32 vm02 bash[23351]: cluster 2026-03-09T17:53:31.022828+0000 mgr.y (mgr.14505) 1026 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:32 vm00 bash[28333]: cluster 2026-03-09T17:53:31.022828+0000 mgr.y (mgr.14505) 1026 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:32 vm00 bash[28333]: cluster 2026-03-09T17:53:31.022828+0000 mgr.y (mgr.14505) 1026 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:32 vm00 bash[20770]: cluster 2026-03-09T17:53:31.022828+0000 mgr.y (mgr.14505) 1026 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:32 vm00 bash[20770]: cluster 2026-03-09T17:53:31.022828+0000 mgr.y (mgr.14505) 1026 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:33.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:53:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:53:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:34 vm02 
bash[23351]: audit 2026-03-09T17:53:32.717541+0000 mgr.y (mgr.14505) 1027 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:34 vm02 bash[23351]: audit 2026-03-09T17:53:32.717541+0000 mgr.y (mgr.14505) 1027 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:34 vm02 bash[23351]: cluster 2026-03-09T17:53:33.023470+0000 mgr.y (mgr.14505) 1028 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:34 vm02 bash[23351]: cluster 2026-03-09T17:53:33.023470+0000 mgr.y (mgr.14505) 1028 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:34 vm00 bash[28333]: audit 2026-03-09T17:53:32.717541+0000 mgr.y (mgr.14505) 1027 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:34 vm00 bash[28333]: audit 2026-03-09T17:53:32.717541+0000 mgr.y (mgr.14505) 1027 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:34 vm00 bash[28333]: cluster 2026-03-09T17:53:33.023470+0000 mgr.y (mgr.14505) 1028 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:34 vm00 bash[28333]: cluster 2026-03-09T17:53:33.023470+0000 mgr.y (mgr.14505) 1028 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:34 vm00 bash[20770]: audit 2026-03-09T17:53:32.717541+0000 mgr.y (mgr.14505) 1027 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:34 vm00 bash[20770]: audit 2026-03-09T17:53:32.717541+0000 mgr.y (mgr.14505) 1027 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:34 vm00 bash[20770]: cluster 2026-03-09T17:53:33.023470+0000 mgr.y (mgr.14505) 1028 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:34 vm00 bash[20770]: cluster 2026-03-09T17:53:33.023470+0000 mgr.y (mgr.14505) 1028 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:36 vm02 bash[23351]: cluster 2026-03-09T17:53:35.023807+0000 mgr.y (mgr.14505) 1029 : cluster 
[DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:36 vm02 bash[23351]: cluster 2026-03-09T17:53:35.023807+0000 mgr.y (mgr.14505) 1029 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:36 vm00 bash[28333]: cluster 2026-03-09T17:53:35.023807+0000 mgr.y (mgr.14505) 1029 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:36 vm00 bash[28333]: cluster 2026-03-09T17:53:35.023807+0000 mgr.y (mgr.14505) 1029 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:36 vm00 bash[20770]: cluster 2026-03-09T17:53:35.023807+0000 mgr.y (mgr.14505) 1029 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:36 vm00 bash[20770]: cluster 2026-03-09T17:53:35.023807+0000 mgr.y (mgr.14505) 1029 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:53:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:53:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:53:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:38 vm02 bash[23351]: cluster 2026-03-09T17:53:37.024081+0000 mgr.y (mgr.14505) 1030 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:38 vm02 bash[23351]: cluster 2026-03-09T17:53:37.024081+0000 mgr.y (mgr.14505) 1030 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:38 vm00 bash[28333]: cluster 2026-03-09T17:53:37.024081+0000 mgr.y (mgr.14505) 1030 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:38 vm00 bash[28333]: cluster 2026-03-09T17:53:37.024081+0000 mgr.y (mgr.14505) 1030 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:38 vm00 bash[20770]: cluster 2026-03-09T17:53:37.024081+0000 mgr.y (mgr.14505) 1030 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:38 vm00 bash[20770]: cluster 2026-03-09T17:53:37.024081+0000 mgr.y (mgr.14505) 1030 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:40.635 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:40 vm02 bash[23351]: cluster 2026-03-09T17:53:39.024786+0000 mgr.y (mgr.14505) 1031 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:40 vm02 bash[23351]: cluster 2026-03-09T17:53:39.024786+0000 mgr.y (mgr.14505) 1031 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:40 vm00 bash[28333]: cluster 2026-03-09T17:53:39.024786+0000 mgr.y (mgr.14505) 1031 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:40 vm00 bash[28333]: cluster 2026-03-09T17:53:39.024786+0000 mgr.y (mgr.14505) 1031 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:40 vm00 bash[20770]: cluster 2026-03-09T17:53:39.024786+0000 mgr.y (mgr.14505) 1031 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:40 vm00 bash[20770]: cluster 2026-03-09T17:53:39.024786+0000 mgr.y (mgr.14505) 1031 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:42.728 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:42 vm02 bash[23351]: cluster 2026-03-09T17:53:41.025060+0000 mgr.y (mgr.14505) 1032 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:42.728 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:42 vm02 bash[23351]: cluster 2026-03-09T17:53:41.025060+0000 mgr.y (mgr.14505) 1032 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:42 vm00 bash[28333]: cluster 2026-03-09T17:53:41.025060+0000 mgr.y (mgr.14505) 1032 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:42 vm00 bash[28333]: cluster 2026-03-09T17:53:41.025060+0000 mgr.y (mgr.14505) 1032 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:42 vm00 bash[20770]: cluster 2026-03-09T17:53:41.025060+0000 mgr.y (mgr.14505) 1032 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:42 vm00 bash[20770]: cluster 2026-03-09T17:53:41.025060+0000 mgr.y (mgr.14505) 1032 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:43.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:53:42 vm02 
bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:44 vm00 bash[28333]: audit 2026-03-09T17:53:42.727905+0000 mgr.y (mgr.14505) 1033 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:44 vm00 bash[28333]: audit 2026-03-09T17:53:42.727905+0000 mgr.y (mgr.14505) 1033 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:44 vm00 bash[28333]: cluster 2026-03-09T17:53:43.025559+0000 mgr.y (mgr.14505) 1034 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:44 vm00 bash[28333]: cluster 2026-03-09T17:53:43.025559+0000 mgr.y (mgr.14505) 1034 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:44 vm00 bash[28333]: audit 2026-03-09T17:53:43.774530+0000 mon.c (mon.2) 893 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:44 vm00 bash[28333]: audit 2026-03-09T17:53:43.774530+0000 mon.c (mon.2) 893 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:44 vm00 bash[20770]: audit 2026-03-09T17:53:42.727905+0000 mgr.y (mgr.14505) 1033 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:44 vm00 bash[20770]: audit 2026-03-09T17:53:42.727905+0000 mgr.y (mgr.14505) 1033 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:44 vm00 bash[20770]: cluster 2026-03-09T17:53:43.025559+0000 mgr.y (mgr.14505) 1034 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:44 vm00 bash[20770]: cluster 2026-03-09T17:53:43.025559+0000 mgr.y (mgr.14505) 1034 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:44 vm00 bash[20770]: audit 2026-03-09T17:53:43.774530+0000 mon.c (mon.2) 893 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:44 vm00 bash[20770]: audit 2026-03-09T17:53:43.774530+0000 mon.c (mon.2) 893 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T17:53:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:44 vm02 bash[23351]: audit 2026-03-09T17:53:42.727905+0000 mgr.y (mgr.14505) 1033 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:44 vm02 bash[23351]: audit 2026-03-09T17:53:42.727905+0000 mgr.y (mgr.14505) 1033 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:44 vm02 bash[23351]: cluster 2026-03-09T17:53:43.025559+0000 mgr.y (mgr.14505) 1034 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:44 vm02 bash[23351]: cluster 2026-03-09T17:53:43.025559+0000 mgr.y (mgr.14505) 1034 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:44 vm02 bash[23351]: audit 2026-03-09T17:53:43.774530+0000 mon.c (mon.2) 893 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:44 vm02 bash[23351]: audit 2026-03-09T17:53:43.774530+0000 mon.c (mon.2) 893 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:46 vm00 bash[28333]: cluster 2026-03-09T17:53:45.025928+0000 mgr.y (mgr.14505) 1035 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:46 vm00 bash[28333]: cluster 2026-03-09T17:53:45.025928+0000 mgr.y (mgr.14505) 1035 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:46 vm00 bash[20770]: cluster 2026-03-09T17:53:45.025928+0000 mgr.y (mgr.14505) 1035 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:46 vm00 bash[20770]: cluster 2026-03-09T17:53:45.025928+0000 mgr.y (mgr.14505) 1035 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:53:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:53:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:53:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:46 vm02 bash[23351]: cluster 2026-03-09T17:53:45.025928+0000 mgr.y (mgr.14505) 1035 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:46 vm02 bash[23351]: cluster 2026-03-09T17:53:45.025928+0000 mgr.y (mgr.14505) 1035 : 
cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:48 vm00 bash[28333]: cluster 2026-03-09T17:53:47.026219+0000 mgr.y (mgr.14505) 1036 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:48 vm00 bash[28333]: cluster 2026-03-09T17:53:47.026219+0000 mgr.y (mgr.14505) 1036 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:48 vm00 bash[20770]: cluster 2026-03-09T17:53:47.026219+0000 mgr.y (mgr.14505) 1036 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:48 vm00 bash[20770]: cluster 2026-03-09T17:53:47.026219+0000 mgr.y (mgr.14505) 1036 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:48 vm02 bash[23351]: cluster 2026-03-09T17:53:47.026219+0000 mgr.y (mgr.14505) 1036 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:48 vm02 bash[23351]: cluster 2026-03-09T17:53:47.026219+0000 mgr.y (mgr.14505) 1036 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:50 vm00 bash[28333]: cluster 2026-03-09T17:53:49.026958+0000 mgr.y (mgr.14505) 1037 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:50 vm00 bash[28333]: cluster 2026-03-09T17:53:49.026958+0000 mgr.y (mgr.14505) 1037 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:50 vm00 bash[20770]: cluster 2026-03-09T17:53:49.026958+0000 mgr.y (mgr.14505) 1037 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:50 vm00 bash[20770]: cluster 2026-03-09T17:53:49.026958+0000 mgr.y (mgr.14505) 1037 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:50 vm02 bash[23351]: cluster 2026-03-09T17:53:49.026958+0000 mgr.y (mgr.14505) 1037 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:50 vm02 bash[23351]: cluster 2026-03-09T17:53:49.026958+0000 mgr.y (mgr.14505) 1037 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:52.738 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:52 vm02 bash[23351]: cluster 2026-03-09T17:53:51.027291+0000 mgr.y (mgr.14505) 1038 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:52.738 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:52 vm02 bash[23351]: cluster 2026-03-09T17:53:51.027291+0000 mgr.y (mgr.14505) 1038 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:52 vm00 bash[28333]: cluster 2026-03-09T17:53:51.027291+0000 mgr.y (mgr.14505) 1038 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:52 vm00 bash[28333]: cluster 2026-03-09T17:53:51.027291+0000 mgr.y (mgr.14505) 1038 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:52 vm00 bash[20770]: cluster 2026-03-09T17:53:51.027291+0000 mgr.y (mgr.14505) 1038 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:52 vm00 bash[20770]: cluster 2026-03-09T17:53:51.027291+0000 mgr.y (mgr.14505) 1038 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:53.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:53:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:54 vm00 bash[28333]: audit 2026-03-09T17:53:52.737919+0000 mgr.y (mgr.14505) 1039 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:54 vm00 bash[28333]: audit 2026-03-09T17:53:52.737919+0000 mgr.y (mgr.14505) 1039 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:54 vm00 bash[28333]: cluster 2026-03-09T17:53:53.027908+0000 mgr.y (mgr.14505) 1040 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:54 vm00 bash[28333]: cluster 2026-03-09T17:53:53.027908+0000 mgr.y (mgr.14505) 1040 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:54 vm00 bash[20770]: audit 2026-03-09T17:53:52.737919+0000 mgr.y (mgr.14505) 1039 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:54 vm00 bash[20770]: audit 2026-03-09T17:53:52.737919+0000 mgr.y (mgr.14505) 1039 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:54 vm00 bash[20770]: cluster 2026-03-09T17:53:53.027908+0000 mgr.y (mgr.14505) 1040 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:54 vm00 bash[20770]: cluster 2026-03-09T17:53:53.027908+0000 mgr.y (mgr.14505) 1040 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:54 vm02 bash[23351]: audit 2026-03-09T17:53:52.737919+0000 mgr.y (mgr.14505) 1039 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:54 vm02 bash[23351]: audit 2026-03-09T17:53:52.737919+0000 mgr.y (mgr.14505) 1039 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:53:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:54 vm02 bash[23351]: cluster 2026-03-09T17:53:53.027908+0000 mgr.y (mgr.14505) 1040 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:54 vm02 bash[23351]: cluster 2026-03-09T17:53:53.027908+0000 mgr.y (mgr.14505) 1040 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:53:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:56 vm00 bash[28333]: cluster 2026-03-09T17:53:55.028344+0000 mgr.y (mgr.14505) 1041 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:56 vm00 bash[28333]: cluster 2026-03-09T17:53:55.028344+0000 mgr.y (mgr.14505) 1041 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:56 vm00 bash[20770]: cluster 2026-03-09T17:53:55.028344+0000 mgr.y (mgr.14505) 1041 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:56 vm00 bash[20770]: cluster 2026-03-09T17:53:55.028344+0000 mgr.y (mgr.14505) 1041 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:53:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:53:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:53:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:56 vm02 bash[23351]: cluster 2026-03-09T17:53:55.028344+0000 mgr.y (mgr.14505) 1041 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:56.885 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:56 vm02 bash[23351]: cluster 2026-03-09T17:53:55.028344+0000 mgr.y (mgr.14505) 1041 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:58 vm00 bash[28333]: cluster 2026-03-09T17:53:57.028665+0000 mgr.y (mgr.14505) 1042 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:58 vm00 bash[28333]: cluster 2026-03-09T17:53:57.028665+0000 mgr.y (mgr.14505) 1042 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:58 vm00 bash[20770]: cluster 2026-03-09T17:53:57.028665+0000 mgr.y (mgr.14505) 1042 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:58 vm00 bash[20770]: cluster 2026-03-09T17:53:57.028665+0000 mgr.y (mgr.14505) 1042 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:58 vm02 bash[23351]: cluster 2026-03-09T17:53:57.028665+0000 mgr.y (mgr.14505) 1042 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:58 vm02 bash[23351]: cluster 2026-03-09T17:53:57.028665+0000 mgr.y (mgr.14505) 1042 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:53:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:59 vm00 bash[28333]: audit 2026-03-09T17:53:58.780757+0000 mon.c (mon.2) 894 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:53:59 vm00 bash[28333]: audit 2026-03-09T17:53:58.780757+0000 mon.c (mon.2) 894 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:59 vm00 bash[20770]: audit 2026-03-09T17:53:58.780757+0000 mon.c (mon.2) 894 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:53:59 vm00 bash[20770]: audit 2026-03-09T17:53:58.780757+0000 mon.c (mon.2) 894 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:59 vm02 bash[23351]: audit 2026-03-09T17:53:58.780757+0000 mon.c (mon.2) 894 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:53:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:53:59 vm02 
bash[23351]: audit 2026-03-09T17:53:58.780757+0000 mon.c (mon.2) 894 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:00 vm00 bash[28333]: cluster 2026-03-09T17:53:59.029377+0000 mgr.y (mgr.14505) 1043 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:00 vm00 bash[28333]: cluster 2026-03-09T17:53:59.029377+0000 mgr.y (mgr.14505) 1043 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:00 vm00 bash[20770]: cluster 2026-03-09T17:53:59.029377+0000 mgr.y (mgr.14505) 1043 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:00 vm00 bash[20770]: cluster 2026-03-09T17:53:59.029377+0000 mgr.y (mgr.14505) 1043 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:00 vm02 bash[23351]: cluster 2026-03-09T17:53:59.029377+0000 mgr.y (mgr.14505) 1043 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:00 vm02 bash[23351]: cluster 2026-03-09T17:53:59.029377+0000 mgr.y (mgr.14505) 1043 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:02.748 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:02 vm02 bash[23351]: cluster 2026-03-09T17:54:01.029708+0000 mgr.y (mgr.14505) 1044 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:02.748 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:02 vm02 bash[23351]: cluster 2026-03-09T17:54:01.029708+0000 mgr.y (mgr.14505) 1044 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:02 vm00 bash[28333]: cluster 2026-03-09T17:54:01.029708+0000 mgr.y (mgr.14505) 1044 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:02 vm00 bash[28333]: cluster 2026-03-09T17:54:01.029708+0000 mgr.y (mgr.14505) 1044 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:02 vm00 bash[20770]: cluster 2026-03-09T17:54:01.029708+0000 mgr.y (mgr.14505) 1044 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:02 vm00 bash[20770]: cluster 2026-03-09T17:54:01.029708+0000 mgr.y (mgr.14505) 
1044 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:03.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:54:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:04 vm00 bash[28333]: audit 2026-03-09T17:54:02.748197+0000 mgr.y (mgr.14505) 1045 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:04 vm00 bash[28333]: audit 2026-03-09T17:54:02.748197+0000 mgr.y (mgr.14505) 1045 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:04 vm00 bash[28333]: cluster 2026-03-09T17:54:03.030194+0000 mgr.y (mgr.14505) 1046 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:04 vm00 bash[28333]: cluster 2026-03-09T17:54:03.030194+0000 mgr.y (mgr.14505) 1046 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:04 vm00 bash[20770]: audit 2026-03-09T17:54:02.748197+0000 mgr.y (mgr.14505) 1045 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:04 vm00 bash[20770]: audit 2026-03-09T17:54:02.748197+0000 mgr.y (mgr.14505) 1045 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:04 vm00 bash[20770]: cluster 2026-03-09T17:54:03.030194+0000 mgr.y (mgr.14505) 1046 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:04 vm00 bash[20770]: cluster 2026-03-09T17:54:03.030194+0000 mgr.y (mgr.14505) 1046 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:04 vm02 bash[23351]: audit 2026-03-09T17:54:02.748197+0000 mgr.y (mgr.14505) 1045 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:04 vm02 bash[23351]: audit 2026-03-09T17:54:02.748197+0000 mgr.y (mgr.14505) 1045 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:04 vm02 bash[23351]: cluster 2026-03-09T17:54:03.030194+0000 mgr.y (mgr.14505) 1046 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:04 vm02 
bash[23351]: cluster 2026-03-09T17:54:03.030194+0000 mgr.y (mgr.14505) 1046 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:06 vm00 bash[28333]: cluster 2026-03-09T17:54:05.030518+0000 mgr.y (mgr.14505) 1047 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:06 vm00 bash[28333]: cluster 2026-03-09T17:54:05.030518+0000 mgr.y (mgr.14505) 1047 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:06 vm00 bash[20770]: cluster 2026-03-09T17:54:05.030518+0000 mgr.y (mgr.14505) 1047 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:06 vm00 bash[20770]: cluster 2026-03-09T17:54:05.030518+0000 mgr.y (mgr.14505) 1047 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:54:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:54:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:54:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:06 vm02 bash[23351]: cluster 2026-03-09T17:54:05.030518+0000 mgr.y (mgr.14505) 1047 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:06 vm02 bash[23351]: cluster 2026-03-09T17:54:05.030518+0000 mgr.y (mgr.14505) 1047 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:08 vm00 bash[28333]: cluster 2026-03-09T17:54:07.030807+0000 mgr.y (mgr.14505) 1048 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:08 vm00 bash[28333]: cluster 2026-03-09T17:54:07.030807+0000 mgr.y (mgr.14505) 1048 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:08 vm00 bash[28333]: audit 2026-03-09T17:54:08.006079+0000 mon.c (mon.2) 895 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:08 vm00 bash[28333]: audit 2026-03-09T17:54:08.006079+0000 mon.c (mon.2) 895 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:08 vm00 bash[20770]: cluster 2026-03-09T17:54:07.030807+0000 mgr.y (mgr.14505) 1048 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:08 vm00 bash[20770]: cluster 2026-03-09T17:54:07.030807+0000 mgr.y (mgr.14505) 1048 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:08 vm00 bash[20770]: audit 2026-03-09T17:54:08.006079+0000 mon.c (mon.2) 895 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:54:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:08 vm00 bash[20770]: audit 2026-03-09T17:54:08.006079+0000 mon.c (mon.2) 895 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:54:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:08 vm02 bash[23351]: cluster 2026-03-09T17:54:07.030807+0000 mgr.y (mgr.14505) 1048 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:08 vm02 bash[23351]: cluster 2026-03-09T17:54:07.030807+0000 mgr.y (mgr.14505) 1048 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:08 vm02 bash[23351]: audit 2026-03-09T17:54:08.006079+0000 mon.c (mon.2) 895 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:54:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:08 vm02 bash[23351]: audit 2026-03-09T17:54:08.006079+0000 mon.c (mon.2) 895 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:54:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:10 vm00 bash[28333]: cluster 2026-03-09T17:54:09.031451+0000 mgr.y (mgr.14505) 1049 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:10 vm00 bash[28333]: cluster 2026-03-09T17:54:09.031451+0000 mgr.y (mgr.14505) 1049 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:10 vm00 bash[20770]: cluster 2026-03-09T17:54:09.031451+0000 mgr.y (mgr.14505) 1049 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:10 vm00 bash[20770]: cluster 2026-03-09T17:54:09.031451+0000 mgr.y (mgr.14505) 1049 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:10.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:10 vm02 bash[23351]: cluster 2026-03-09T17:54:09.031451+0000 mgr.y (mgr.14505) 1049 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:10.885 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:10 vm02 bash[23351]: cluster 2026-03-09T17:54:09.031451+0000 mgr.y (mgr.14505) 1049 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:12.757 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:12 vm02 bash[23351]: cluster 2026-03-09T17:54:11.031819+0000 mgr.y (mgr.14505) 1050 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:12.757 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:12 vm02 bash[23351]: cluster 2026-03-09T17:54:11.031819+0000 mgr.y (mgr.14505) 1050 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:12 vm00 bash[28333]: cluster 2026-03-09T17:54:11.031819+0000 mgr.y (mgr.14505) 1050 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:12 vm00 bash[28333]: cluster 2026-03-09T17:54:11.031819+0000 mgr.y (mgr.14505) 1050 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:12 vm00 bash[20770]: cluster 2026-03-09T17:54:11.031819+0000 mgr.y (mgr.14505) 1050 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:12 vm00 bash[20770]: cluster 2026-03-09T17:54:11.031819+0000 mgr.y (mgr.14505) 1050 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:13.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:54:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:12.756563+0000 mgr.y (mgr.14505) 1051 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:12.756563+0000 mgr.y (mgr.14505) 1051 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: cluster 2026-03-09T17:54:13.032458+0000 mgr.y (mgr.14505) 1052 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: cluster 2026-03-09T17:54:13.032458+0000 mgr.y (mgr.14505) 1052 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.786435+0000 mon.c (mon.2) 896 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": 
"osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.786435+0000 mon.c (mon.2) 896 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.906475+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.906475+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.912920+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.912920+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.915196+0000 mon.c (mon.2) 897 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.915196+0000 mon.c (mon.2) 897 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.916190+0000 mon.c (mon.2) 898 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.916190+0000 mon.c (mon.2) 898 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.921286+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:14 vm00 bash[28333]: audit 2026-03-09T17:54:13.921286+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:12.756563+0000 mgr.y (mgr.14505) 1051 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:12.756563+0000 mgr.y (mgr.14505) 1051 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: cluster 2026-03-09T17:54:13.032458+0000 mgr.y (mgr.14505) 1052 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: cluster 2026-03-09T17:54:13.032458+0000 mgr.y (mgr.14505) 1052 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.786435+0000 mon.c (mon.2) 896 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.786435+0000 mon.c (mon.2) 896 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.906475+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.906475+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.912920+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.912920+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.915196+0000 mon.c (mon.2) 897 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.915196+0000 mon.c (mon.2) 897 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.916190+0000 mon.c (mon.2) 898 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.916190+0000 mon.c (mon.2) 898 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.921286+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:14 vm00 bash[20770]: audit 2026-03-09T17:54:13.921286+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:12.756563+0000 mgr.y (mgr.14505) 1051 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T17:54:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:12.756563+0000 mgr.y (mgr.14505) 1051 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: cluster 2026-03-09T17:54:13.032458+0000 mgr.y (mgr.14505) 1052 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: cluster 2026-03-09T17:54:13.032458+0000 mgr.y (mgr.14505) 1052 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.786435+0000 mon.c (mon.2) 896 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.786435+0000 mon.c (mon.2) 896 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.906475+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.906475+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.912920+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.912920+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.915196+0000 mon.c (mon.2) 897 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.915196+0000 mon.c (mon.2) 897 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.916190+0000 mon.c (mon.2) 898 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.916190+0000 mon.c (mon.2) 898 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.921286+0000 
mon.a (mon.0) 3548 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:14 vm02 bash[23351]: audit 2026-03-09T17:54:13.921286+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:54:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:16 vm00 bash[28333]: cluster 2026-03-09T17:54:15.032900+0000 mgr.y (mgr.14505) 1053 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:16 vm00 bash[28333]: cluster 2026-03-09T17:54:15.032900+0000 mgr.y (mgr.14505) 1053 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:16 vm00 bash[20770]: cluster 2026-03-09T17:54:15.032900+0000 mgr.y (mgr.14505) 1053 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:16 vm00 bash[20770]: cluster 2026-03-09T17:54:15.032900+0000 mgr.y (mgr.14505) 1053 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:54:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:54:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:54:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:16 vm02 bash[23351]: cluster 2026-03-09T17:54:15.032900+0000 mgr.y (mgr.14505) 1053 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:16 vm02 bash[23351]: cluster 2026-03-09T17:54:15.032900+0000 mgr.y (mgr.14505) 1053 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:18 vm00 bash[28333]: cluster 2026-03-09T17:54:17.033225+0000 mgr.y (mgr.14505) 1054 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:18 vm00 bash[28333]: cluster 2026-03-09T17:54:17.033225+0000 mgr.y (mgr.14505) 1054 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:18 vm00 bash[20770]: cluster 2026-03-09T17:54:17.033225+0000 mgr.y (mgr.14505) 1054 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:18 vm00 bash[20770]: cluster 2026-03-09T17:54:17.033225+0000 mgr.y (mgr.14505) 1054 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:18 vm02 bash[23351]: cluster 2026-03-09T17:54:17.033225+0000 mgr.y (mgr.14505) 1054 : cluster [DBG] pgmap v1496: 228 
pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:18 vm02 bash[23351]: cluster 2026-03-09T17:54:17.033225+0000 mgr.y (mgr.14505) 1054 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:20 vm00 bash[28333]: cluster 2026-03-09T17:54:19.034018+0000 mgr.y (mgr.14505) 1055 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:20 vm00 bash[28333]: cluster 2026-03-09T17:54:19.034018+0000 mgr.y (mgr.14505) 1055 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:20 vm00 bash[20770]: cluster 2026-03-09T17:54:19.034018+0000 mgr.y (mgr.14505) 1055 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:20 vm00 bash[20770]: cluster 2026-03-09T17:54:19.034018+0000 mgr.y (mgr.14505) 1055 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:20 vm02 bash[23351]: cluster 2026-03-09T17:54:19.034018+0000 mgr.y (mgr.14505) 1055 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:20 vm02 bash[23351]: cluster 2026-03-09T17:54:19.034018+0000 mgr.y (mgr.14505) 1055 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:22 vm00 bash[28333]: cluster 2026-03-09T17:54:21.034358+0000 mgr.y (mgr.14505) 1056 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:22 vm00 bash[28333]: cluster 2026-03-09T17:54:21.034358+0000 mgr.y (mgr.14505) 1056 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:22 vm00 bash[20770]: cluster 2026-03-09T17:54:21.034358+0000 mgr.y (mgr.14505) 1056 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:22 vm00 bash[20770]: cluster 2026-03-09T17:54:21.034358+0000 mgr.y (mgr.14505) 1056 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:22.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:22 vm02 bash[23351]: cluster 2026-03-09T17:54:21.034358+0000 mgr.y (mgr.14505) 1056 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:22.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:22 vm02 bash[23351]: cluster 2026-03-09T17:54:21.034358+0000 mgr.y (mgr.14505) 1056 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:22.886 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:54:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:24 vm00 bash[28333]: audit 2026-03-09T17:54:22.763948+0000 mgr.y (mgr.14505) 1057 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:24 vm00 bash[28333]: audit 2026-03-09T17:54:22.763948+0000 mgr.y (mgr.14505) 1057 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:24 vm00 bash[28333]: cluster 2026-03-09T17:54:23.034968+0000 mgr.y (mgr.14505) 1058 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:24 vm00 bash[28333]: cluster 2026-03-09T17:54:23.034968+0000 mgr.y (mgr.14505) 1058 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:24 vm00 bash[20770]: audit 2026-03-09T17:54:22.763948+0000 mgr.y (mgr.14505) 1057 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:24 vm00 bash[20770]: audit 2026-03-09T17:54:22.763948+0000 mgr.y (mgr.14505) 1057 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:24 vm00 bash[20770]: cluster 2026-03-09T17:54:23.034968+0000 mgr.y (mgr.14505) 1058 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:24 vm00 bash[20770]: cluster 2026-03-09T17:54:23.034968+0000 mgr.y (mgr.14505) 1058 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:24 vm02 bash[23351]: audit 2026-03-09T17:54:22.763948+0000 mgr.y (mgr.14505) 1057 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:24 vm02 bash[23351]: audit 2026-03-09T17:54:22.763948+0000 mgr.y (mgr.14505) 1057 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:24 vm02 bash[23351]: cluster 2026-03-09T17:54:23.034968+0000 mgr.y (mgr.14505) 1058 : cluster [DBG] pgmap v1499: 228 pgs: 228 
active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:24 vm02 bash[23351]: cluster 2026-03-09T17:54:23.034968+0000 mgr.y (mgr.14505) 1058 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:25 vm00 bash[28333]: cluster 2026-03-09T17:54:25.035382+0000 mgr.y (mgr.14505) 1059 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:25 vm00 bash[28333]: cluster 2026-03-09T17:54:25.035382+0000 mgr.y (mgr.14505) 1059 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:25 vm00 bash[20770]: cluster 2026-03-09T17:54:25.035382+0000 mgr.y (mgr.14505) 1059 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:25 vm00 bash[20770]: cluster 2026-03-09T17:54:25.035382+0000 mgr.y (mgr.14505) 1059 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:25.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:25 vm02 bash[23351]: cluster 2026-03-09T17:54:25.035382+0000 mgr.y (mgr.14505) 1059 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:25 vm02 bash[23351]: cluster 2026-03-09T17:54:25.035382+0000 mgr.y (mgr.14505) 1059 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:54:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:54:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:54:28.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:28 vm02 bash[23351]: cluster 2026-03-09T17:54:27.035666+0000 mgr.y (mgr.14505) 1060 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:28 vm02 bash[23351]: cluster 2026-03-09T17:54:27.035666+0000 mgr.y (mgr.14505) 1060 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:28 vm00 bash[28333]: cluster 2026-03-09T17:54:27.035666+0000 mgr.y (mgr.14505) 1060 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:28 vm00 bash[28333]: cluster 2026-03-09T17:54:27.035666+0000 mgr.y (mgr.14505) 1060 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:28 
vm00 bash[20770]: cluster 2026-03-09T17:54:27.035666+0000 mgr.y (mgr.14505) 1060 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:28 vm00 bash[20770]: cluster 2026-03-09T17:54:27.035666+0000 mgr.y (mgr.14505) 1060 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:29.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:29 vm02 bash[23351]: audit 2026-03-09T17:54:28.792636+0000 mon.c (mon.2) 899 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:29 vm02 bash[23351]: audit 2026-03-09T17:54:28.792636+0000 mon.c (mon.2) 899 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:29 vm00 bash[28333]: audit 2026-03-09T17:54:28.792636+0000 mon.c (mon.2) 899 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:29 vm00 bash[28333]: audit 2026-03-09T17:54:28.792636+0000 mon.c (mon.2) 899 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:29 vm00 bash[20770]: audit 2026-03-09T17:54:28.792636+0000 mon.c (mon.2) 899 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:29 vm00 bash[20770]: audit 2026-03-09T17:54:28.792636+0000 mon.c (mon.2) 899 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:30 vm02 bash[23351]: cluster 2026-03-09T17:54:29.036261+0000 mgr.y (mgr.14505) 1061 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:30.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:30 vm02 bash[23351]: cluster 2026-03-09T17:54:29.036261+0000 mgr.y (mgr.14505) 1061 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:30 vm00 bash[28333]: cluster 2026-03-09T17:54:29.036261+0000 mgr.y (mgr.14505) 1061 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:30 vm00 bash[28333]: cluster 2026-03-09T17:54:29.036261+0000 mgr.y (mgr.14505) 1061 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:30 vm00 bash[20770]: cluster 
2026-03-09T17:54:29.036261+0000 mgr.y (mgr.14505) 1061 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:30 vm00 bash[20770]: cluster 2026-03-09T17:54:29.036261+0000 mgr.y (mgr.14505) 1061 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:32.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:32 vm02 bash[23351]: cluster 2026-03-09T17:54:31.036639+0000 mgr.y (mgr.14505) 1062 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:32.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:32 vm02 bash[23351]: cluster 2026-03-09T17:54:31.036639+0000 mgr.y (mgr.14505) 1062 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:32 vm00 bash[28333]: cluster 2026-03-09T17:54:31.036639+0000 mgr.y (mgr.14505) 1062 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:32 vm00 bash[28333]: cluster 2026-03-09T17:54:31.036639+0000 mgr.y (mgr.14505) 1062 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:32 vm00 bash[20770]: cluster 2026-03-09T17:54:31.036639+0000 mgr.y (mgr.14505) 1062 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:32 vm00 bash[20770]: cluster 2026-03-09T17:54:31.036639+0000 mgr.y (mgr.14505) 1062 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:33.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:54:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:54:34.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:34 vm02 bash[23351]: audit 2026-03-09T17:54:32.773262+0000 mgr.y (mgr.14505) 1063 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:34 vm02 bash[23351]: audit 2026-03-09T17:54:32.773262+0000 mgr.y (mgr.14505) 1063 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:34 vm02 bash[23351]: cluster 2026-03-09T17:54:33.037281+0000 mgr.y (mgr.14505) 1064 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:34.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:34 vm02 bash[23351]: cluster 2026-03-09T17:54:33.037281+0000 mgr.y (mgr.14505) 1064 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:34.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:34 vm00 bash[28333]: audit 2026-03-09T17:54:32.773262+0000 mgr.y (mgr.14505) 1063 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:34 vm00 bash[28333]: audit 2026-03-09T17:54:32.773262+0000 mgr.y (mgr.14505) 1063 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:34 vm00 bash[28333]: cluster 2026-03-09T17:54:33.037281+0000 mgr.y (mgr.14505) 1064 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:34 vm00 bash[28333]: cluster 2026-03-09T17:54:33.037281+0000 mgr.y (mgr.14505) 1064 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:34 vm00 bash[20770]: audit 2026-03-09T17:54:32.773262+0000 mgr.y (mgr.14505) 1063 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:34 vm00 bash[20770]: audit 2026-03-09T17:54:32.773262+0000 mgr.y (mgr.14505) 1063 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:34 vm00 bash[20770]: cluster 2026-03-09T17:54:33.037281+0000 mgr.y (mgr.14505) 1064 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:34 vm00 bash[20770]: cluster 2026-03-09T17:54:33.037281+0000 mgr.y (mgr.14505) 1064 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:36.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:36 vm02 bash[23351]: cluster 2026-03-09T17:54:35.037690+0000 mgr.y (mgr.14505) 1065 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:36.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:36 vm02 bash[23351]: cluster 2026-03-09T17:54:35.037690+0000 mgr.y (mgr.14505) 1065 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:36 vm00 bash[28333]: cluster 2026-03-09T17:54:35.037690+0000 mgr.y (mgr.14505) 1065 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:36 vm00 bash[28333]: cluster 2026-03-09T17:54:35.037690+0000 mgr.y (mgr.14505) 1065 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:36 vm00 bash[20770]: cluster 
2026-03-09T17:54:35.037690+0000 mgr.y (mgr.14505) 1065 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:36 vm00 bash[20770]: cluster 2026-03-09T17:54:35.037690+0000 mgr.y (mgr.14505) 1065 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:54:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:54:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:54:38.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:38 vm02 bash[23351]: cluster 2026-03-09T17:54:37.038030+0000 mgr.y (mgr.14505) 1066 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:38.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:38 vm02 bash[23351]: cluster 2026-03-09T17:54:37.038030+0000 mgr.y (mgr.14505) 1066 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:38 vm00 bash[28333]: cluster 2026-03-09T17:54:37.038030+0000 mgr.y (mgr.14505) 1066 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:38 vm00 bash[28333]: cluster 2026-03-09T17:54:37.038030+0000 mgr.y (mgr.14505) 1066 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:38 vm00 bash[20770]: cluster 2026-03-09T17:54:37.038030+0000 mgr.y (mgr.14505) 1066 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:38 vm00 bash[20770]: cluster 2026-03-09T17:54:37.038030+0000 mgr.y (mgr.14505) 1066 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:40 vm00 bash[28333]: cluster 2026-03-09T17:54:39.038618+0000 mgr.y (mgr.14505) 1067 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:40 vm00 bash[28333]: cluster 2026-03-09T17:54:39.038618+0000 mgr.y (mgr.14505) 1067 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:40 vm00 bash[20770]: cluster 2026-03-09T17:54:39.038618+0000 mgr.y (mgr.14505) 1067 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:40 vm00 bash[20770]: cluster 2026-03-09T17:54:39.038618+0000 mgr.y (mgr.14505) 1067 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:40 vm02 bash[23351]: cluster 2026-03-09T17:54:39.038618+0000 mgr.y (mgr.14505) 1067 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:40 vm02 bash[23351]: cluster 2026-03-09T17:54:39.038618+0000 mgr.y (mgr.14505) 1067 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:42 vm00 bash[28333]: cluster 2026-03-09T17:54:41.038953+0000 mgr.y (mgr.14505) 1068 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:42 vm00 bash[28333]: cluster 2026-03-09T17:54:41.038953+0000 mgr.y (mgr.14505) 1068 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:42 vm00 bash[20770]: cluster 2026-03-09T17:54:41.038953+0000 mgr.y (mgr.14505) 1068 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:42 vm00 bash[20770]: cluster 2026-03-09T17:54:41.038953+0000 mgr.y (mgr.14505) 1068 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:42 vm02 bash[23351]: cluster 2026-03-09T17:54:41.038953+0000 mgr.y (mgr.14505) 1068 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:42 vm02 bash[23351]: cluster 2026-03-09T17:54:41.038953+0000 mgr.y (mgr.14505) 1068 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:43.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:54:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:44 vm00 bash[28333]: audit 2026-03-09T17:54:42.783358+0000 mgr.y (mgr.14505) 1069 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:44 vm00 bash[28333]: audit 2026-03-09T17:54:42.783358+0000 mgr.y (mgr.14505) 1069 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:44 vm00 bash[28333]: cluster 2026-03-09T17:54:43.039507+0000 mgr.y (mgr.14505) 1070 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:44 vm00 bash[28333]: cluster 2026-03-09T17:54:43.039507+0000 mgr.y (mgr.14505) 1070 : cluster [DBG] pgmap v1509: 228 pgs: 228 
active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:44 vm00 bash[28333]: audit 2026-03-09T17:54:43.799113+0000 mon.c (mon.2) 900 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:44 vm00 bash[28333]: audit 2026-03-09T17:54:43.799113+0000 mon.c (mon.2) 900 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:44 vm00 bash[20770]: audit 2026-03-09T17:54:42.783358+0000 mgr.y (mgr.14505) 1069 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:44 vm00 bash[20770]: audit 2026-03-09T17:54:42.783358+0000 mgr.y (mgr.14505) 1069 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:44 vm00 bash[20770]: cluster 2026-03-09T17:54:43.039507+0000 mgr.y (mgr.14505) 1070 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:44 vm00 bash[20770]: cluster 2026-03-09T17:54:43.039507+0000 mgr.y (mgr.14505) 1070 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:44 vm00 bash[20770]: audit 2026-03-09T17:54:43.799113+0000 mon.c (mon.2) 900 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:44 vm00 bash[20770]: audit 2026-03-09T17:54:43.799113+0000 mon.c (mon.2) 900 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:44 vm02 bash[23351]: audit 2026-03-09T17:54:42.783358+0000 mgr.y (mgr.14505) 1069 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:44 vm02 bash[23351]: audit 2026-03-09T17:54:42.783358+0000 mgr.y (mgr.14505) 1069 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:44 vm02 bash[23351]: cluster 2026-03-09T17:54:43.039507+0000 mgr.y (mgr.14505) 1070 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:44 vm02 bash[23351]: cluster 2026-03-09T17:54:43.039507+0000 mgr.y (mgr.14505) 1070 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:44 vm02 bash[23351]: audit 2026-03-09T17:54:43.799113+0000 mon.c (mon.2) 900 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:44 vm02 bash[23351]: audit 2026-03-09T17:54:43.799113+0000 mon.c (mon.2) 900 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:46 vm00 bash[28333]: cluster 2026-03-09T17:54:45.039995+0000 mgr.y (mgr.14505) 1071 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:46 vm00 bash[28333]: cluster 2026-03-09T17:54:45.039995+0000 mgr.y (mgr.14505) 1071 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:46 vm00 bash[20770]: cluster 2026-03-09T17:54:45.039995+0000 mgr.y (mgr.14505) 1071 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:46 vm00 bash[20770]: cluster 2026-03-09T17:54:45.039995+0000 mgr.y (mgr.14505) 1071 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:54:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:54:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:54:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:46 vm02 bash[23351]: cluster 2026-03-09T17:54:45.039995+0000 mgr.y (mgr.14505) 1071 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:46 vm02 bash[23351]: cluster 2026-03-09T17:54:45.039995+0000 mgr.y (mgr.14505) 1071 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:47 vm00 bash[28333]: cluster 2026-03-09T17:54:47.040330+0000 mgr.y (mgr.14505) 1072 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:47 vm00 bash[28333]: cluster 2026-03-09T17:54:47.040330+0000 mgr.y (mgr.14505) 1072 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:47 vm00 bash[20770]: cluster 2026-03-09T17:54:47.040330+0000 mgr.y (mgr.14505) 1072 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:47 vm00 bash[20770]: cluster 2026-03-09T17:54:47.040330+0000 mgr.y 
(mgr.14505) 1072 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:47 vm02 bash[23351]: cluster 2026-03-09T17:54:47.040330+0000 mgr.y (mgr.14505) 1072 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:48.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:47 vm02 bash[23351]: cluster 2026-03-09T17:54:47.040330+0000 mgr.y (mgr.14505) 1072 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:50 vm02 bash[23351]: cluster 2026-03-09T17:54:49.040935+0000 mgr.y (mgr.14505) 1073 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:50.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:50 vm02 bash[23351]: cluster 2026-03-09T17:54:49.040935+0000 mgr.y (mgr.14505) 1073 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:50 vm00 bash[28333]: cluster 2026-03-09T17:54:49.040935+0000 mgr.y (mgr.14505) 1073 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:50 vm00 bash[28333]: cluster 2026-03-09T17:54:49.040935+0000 mgr.y (mgr.14505) 1073 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:50 vm00 bash[20770]: cluster 2026-03-09T17:54:49.040935+0000 mgr.y (mgr.14505) 1073 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:50 vm00 bash[20770]: cluster 2026-03-09T17:54:49.040935+0000 mgr.y (mgr.14505) 1073 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:52 vm02 bash[23351]: cluster 2026-03-09T17:54:51.041279+0000 mgr.y (mgr.14505) 1074 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:52.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:52 vm02 bash[23351]: cluster 2026-03-09T17:54:51.041279+0000 mgr.y (mgr.14505) 1074 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:52 vm00 bash[28333]: cluster 2026-03-09T17:54:51.041279+0000 mgr.y (mgr.14505) 1074 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:52 vm00 bash[28333]: cluster 2026-03-09T17:54:51.041279+0000 mgr.y (mgr.14505) 1074 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 
455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:52 vm00 bash[20770]: cluster 2026-03-09T17:54:51.041279+0000 mgr.y (mgr.14505) 1074 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:52 vm00 bash[20770]: cluster 2026-03-09T17:54:51.041279+0000 mgr.y (mgr.14505) 1074 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:53.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:54:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:54:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:54 vm02 bash[23351]: audit 2026-03-09T17:54:52.794003+0000 mgr.y (mgr.14505) 1075 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:54.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:54 vm02 bash[23351]: audit 2026-03-09T17:54:52.794003+0000 mgr.y (mgr.14505) 1075 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:54 vm02 bash[23351]: cluster 2026-03-09T17:54:53.041783+0000 mgr.y (mgr.14505) 1076 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:54.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:54 vm02 bash[23351]: cluster 2026-03-09T17:54:53.041783+0000 mgr.y (mgr.14505) 1076 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:54 vm00 bash[28333]: audit 2026-03-09T17:54:52.794003+0000 mgr.y (mgr.14505) 1075 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:54 vm00 bash[28333]: audit 2026-03-09T17:54:52.794003+0000 mgr.y (mgr.14505) 1075 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:54 vm00 bash[28333]: cluster 2026-03-09T17:54:53.041783+0000 mgr.y (mgr.14505) 1076 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:54 vm00 bash[28333]: cluster 2026-03-09T17:54:53.041783+0000 mgr.y (mgr.14505) 1076 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:54 vm00 bash[20770]: audit 2026-03-09T17:54:52.794003+0000 mgr.y (mgr.14505) 1075 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:54 vm00 bash[20770]: audit 2026-03-09T17:54:52.794003+0000 mgr.y (mgr.14505) 
1075 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:54 vm00 bash[20770]: cluster 2026-03-09T17:54:53.041783+0000 mgr.y (mgr.14505) 1076 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:54 vm00 bash[20770]: cluster 2026-03-09T17:54:53.041783+0000 mgr.y (mgr.14505) 1076 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:54:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:56 vm02 bash[23351]: cluster 2026-03-09T17:54:55.042140+0000 mgr.y (mgr.14505) 1077 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:56.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:56 vm02 bash[23351]: cluster 2026-03-09T17:54:55.042140+0000 mgr.y (mgr.14505) 1077 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:56 vm00 bash[28333]: cluster 2026-03-09T17:54:55.042140+0000 mgr.y (mgr.14505) 1077 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:56 vm00 bash[28333]: cluster 2026-03-09T17:54:55.042140+0000 mgr.y (mgr.14505) 1077 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:56 vm00 bash[20770]: cluster 2026-03-09T17:54:55.042140+0000 mgr.y (mgr.14505) 1077 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:56 vm00 bash[20770]: cluster 2026-03-09T17:54:55.042140+0000 mgr.y (mgr.14505) 1077 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:54:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:54:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:54:58.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:58 vm02 bash[23351]: cluster 2026-03-09T17:54:57.042513+0000 mgr.y (mgr.14505) 1078 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:58.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:58 vm02 bash[23351]: cluster 2026-03-09T17:54:57.042513+0000 mgr.y (mgr.14505) 1078 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:58 vm00 bash[28333]: cluster 2026-03-09T17:54:57.042513+0000 mgr.y (mgr.14505) 1078 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:58.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:58 vm00 bash[28333]: cluster 2026-03-09T17:54:57.042513+0000 mgr.y (mgr.14505) 1078 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:58 vm00 bash[20770]: cluster 2026-03-09T17:54:57.042513+0000 mgr.y (mgr.14505) 1078 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:58 vm00 bash[20770]: cluster 2026-03-09T17:54:57.042513+0000 mgr.y (mgr.14505) 1078 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:54:59.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:59 vm02 bash[23351]: audit 2026-03-09T17:54:58.805894+0000 mon.c (mon.2) 901 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:59.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:54:59 vm02 bash[23351]: audit 2026-03-09T17:54:58.805894+0000 mon.c (mon.2) 901 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:59 vm00 bash[28333]: audit 2026-03-09T17:54:58.805894+0000 mon.c (mon.2) 901 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:54:59 vm00 bash[28333]: audit 2026-03-09T17:54:58.805894+0000 mon.c (mon.2) 901 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:59 vm00 bash[20770]: audit 2026-03-09T17:54:58.805894+0000 mon.c (mon.2) 901 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:54:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:54:59 vm00 bash[20770]: audit 2026-03-09T17:54:58.805894+0000 mon.c (mon.2) 901 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:00 vm00 bash[28333]: cluster 2026-03-09T17:54:59.043196+0000 mgr.y (mgr.14505) 1079 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:00 vm00 bash[28333]: cluster 2026-03-09T17:54:59.043196+0000 mgr.y (mgr.14505) 1079 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:00 vm00 bash[20770]: cluster 2026-03-09T17:54:59.043196+0000 mgr.y (mgr.14505) 1079 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:00 
vm00 bash[20770]: cluster 2026-03-09T17:54:59.043196+0000 mgr.y (mgr.14505) 1079 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:00 vm02 bash[23351]: cluster 2026-03-09T17:54:59.043196+0000 mgr.y (mgr.14505) 1079 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:00.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:00 vm02 bash[23351]: cluster 2026-03-09T17:54:59.043196+0000 mgr.y (mgr.14505) 1079 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:02 vm00 bash[28333]: cluster 2026-03-09T17:55:01.043500+0000 mgr.y (mgr.14505) 1080 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:02.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:02 vm00 bash[28333]: cluster 2026-03-09T17:55:01.043500+0000 mgr.y (mgr.14505) 1080 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:02 vm00 bash[20770]: cluster 2026-03-09T17:55:01.043500+0000 mgr.y (mgr.14505) 1080 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:02.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:02 vm00 bash[20770]: cluster 2026-03-09T17:55:01.043500+0000 mgr.y (mgr.14505) 1080 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:02 vm02 bash[23351]: cluster 2026-03-09T17:55:01.043500+0000 mgr.y (mgr.14505) 1080 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:02 vm02 bash[23351]: cluster 2026-03-09T17:55:01.043500+0000 mgr.y (mgr.14505) 1080 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:03.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:55:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:04 vm00 bash[28333]: audit 2026-03-09T17:55:02.798634+0000 mgr.y (mgr.14505) 1081 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:04 vm00 bash[28333]: audit 2026-03-09T17:55:02.798634+0000 mgr.y (mgr.14505) 1081 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:04 vm00 bash[28333]: cluster 2026-03-09T17:55:03.044071+0000 mgr.y (mgr.14505) 1082 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:04 vm00 bash[28333]: cluster 2026-03-09T17:55:03.044071+0000 mgr.y (mgr.14505) 1082 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:04 vm00 bash[20770]: audit 2026-03-09T17:55:02.798634+0000 mgr.y (mgr.14505) 1081 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:04 vm00 bash[20770]: audit 2026-03-09T17:55:02.798634+0000 mgr.y (mgr.14505) 1081 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:04 vm00 bash[20770]: cluster 2026-03-09T17:55:03.044071+0000 mgr.y (mgr.14505) 1082 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:04 vm00 bash[20770]: cluster 2026-03-09T17:55:03.044071+0000 mgr.y (mgr.14505) 1082 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:04 vm02 bash[23351]: audit 2026-03-09T17:55:02.798634+0000 mgr.y (mgr.14505) 1081 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:04.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:04 vm02 bash[23351]: audit 2026-03-09T17:55:02.798634+0000 mgr.y (mgr.14505) 1081 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:04 vm02 bash[23351]: cluster 2026-03-09T17:55:03.044071+0000 mgr.y (mgr.14505) 1082 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:04 vm02 bash[23351]: cluster 2026-03-09T17:55:03.044071+0000 mgr.y (mgr.14505) 1082 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:06 vm00 bash[28333]: cluster 2026-03-09T17:55:05.044429+0000 mgr.y (mgr.14505) 1083 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:06.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:06 vm00 bash[28333]: cluster 2026-03-09T17:55:05.044429+0000 mgr.y (mgr.14505) 1083 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:06 vm00 bash[20770]: cluster 2026-03-09T17:55:05.044429+0000 mgr.y (mgr.14505) 1083 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:06.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:06 vm00 
bash[20770]: cluster 2026-03-09T17:55:05.044429+0000 mgr.y (mgr.14505) 1083 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:06.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:55:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:55:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:55:06.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:06 vm02 bash[23351]: cluster 2026-03-09T17:55:05.044429+0000 mgr.y (mgr.14505) 1083 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:06 vm02 bash[23351]: cluster 2026-03-09T17:55:05.044429+0000 mgr.y (mgr.14505) 1083 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:07 vm00 bash[28333]: cluster 2026-03-09T17:55:07.044699+0000 mgr.y (mgr.14505) 1084 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:07 vm00 bash[28333]: cluster 2026-03-09T17:55:07.044699+0000 mgr.y (mgr.14505) 1084 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:07 vm00 bash[20770]: cluster 2026-03-09T17:55:07.044699+0000 mgr.y (mgr.14505) 1084 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:07 vm00 bash[20770]: cluster 2026-03-09T17:55:07.044699+0000 mgr.y (mgr.14505) 1084 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:08.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:07 vm02 bash[23351]: cluster 2026-03-09T17:55:07.044699+0000 mgr.y (mgr.14505) 1084 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:07 vm02 bash[23351]: cluster 2026-03-09T17:55:07.044699+0000 mgr.y (mgr.14505) 1084 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:10.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:10 vm02 bash[23351]: cluster 2026-03-09T17:55:09.045337+0000 mgr.y (mgr.14505) 1085 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:10.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:10 vm02 bash[23351]: cluster 2026-03-09T17:55:09.045337+0000 mgr.y (mgr.14505) 1085 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:10 vm00 bash[28333]: cluster 2026-03-09T17:55:09.045337+0000 mgr.y (mgr.14505) 1085 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:10 vm00 bash[28333]: cluster 2026-03-09T17:55:09.045337+0000 mgr.y (mgr.14505) 1085 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:10 vm00 bash[20770]: cluster 2026-03-09T17:55:09.045337+0000 mgr.y (mgr.14505) 1085 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:10 vm00 bash[20770]: cluster 2026-03-09T17:55:09.045337+0000 mgr.y (mgr.14505) 1085 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:12 vm02 bash[23351]: cluster 2026-03-09T17:55:11.045644+0000 mgr.y (mgr.14505) 1086 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:12.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:12 vm02 bash[23351]: cluster 2026-03-09T17:55:11.045644+0000 mgr.y (mgr.14505) 1086 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:12 vm00 bash[28333]: cluster 2026-03-09T17:55:11.045644+0000 mgr.y (mgr.14505) 1086 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:12 vm00 bash[28333]: cluster 2026-03-09T17:55:11.045644+0000 mgr.y (mgr.14505) 1086 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:12 vm00 bash[20770]: cluster 2026-03-09T17:55:11.045644+0000 mgr.y (mgr.14505) 1086 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:12 vm00 bash[20770]: cluster 2026-03-09T17:55:11.045644+0000 mgr.y (mgr.14505) 1086 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:13.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:55:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:55:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: audit 2026-03-09T17:55:12.809113+0000 mgr.y (mgr.14505) 1087 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: audit 2026-03-09T17:55:12.809113+0000 mgr.y (mgr.14505) 1087 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: cluster 2026-03-09T17:55:13.046252+0000 mgr.y (mgr.14505) 1088 : cluster [DBG] pgmap 
v1524: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: cluster 2026-03-09T17:55:13.046252+0000 mgr.y (mgr.14505) 1088 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: audit 2026-03-09T17:55:13.811752+0000 mon.c (mon.2) 902 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: audit 2026-03-09T17:55:13.811752+0000 mon.c (mon.2) 902 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: audit 2026-03-09T17:55:13.960413+0000 mon.c (mon.2) 903 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:55:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:14 vm02 bash[23351]: audit 2026-03-09T17:55:13.960413+0000 mon.c (mon.2) 903 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: audit 2026-03-09T17:55:12.809113+0000 mgr.y (mgr.14505) 1087 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: audit 2026-03-09T17:55:12.809113+0000 mgr.y (mgr.14505) 1087 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: cluster 2026-03-09T17:55:13.046252+0000 mgr.y (mgr.14505) 1088 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: cluster 2026-03-09T17:55:13.046252+0000 mgr.y (mgr.14505) 1088 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: audit 2026-03-09T17:55:13.811752+0000 mon.c (mon.2) 902 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: audit 2026-03-09T17:55:13.811752+0000 mon.c (mon.2) 902 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: audit 2026-03-09T17:55:13.960413+0000 mon.c (mon.2) 903 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": 
"config dump", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:14 vm00 bash[28333]: audit 2026-03-09T17:55:13.960413+0000 mon.c (mon.2) 903 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: audit 2026-03-09T17:55:12.809113+0000 mgr.y (mgr.14505) 1087 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: audit 2026-03-09T17:55:12.809113+0000 mgr.y (mgr.14505) 1087 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: cluster 2026-03-09T17:55:13.046252+0000 mgr.y (mgr.14505) 1088 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: cluster 2026-03-09T17:55:13.046252+0000 mgr.y (mgr.14505) 1088 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: audit 2026-03-09T17:55:13.811752+0000 mon.c (mon.2) 902 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: audit 2026-03-09T17:55:13.811752+0000 mon.c (mon.2) 902 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: audit 2026-03-09T17:55:13.960413+0000 mon.c (mon.2) 903 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:55:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:14 vm00 bash[20770]: audit 2026-03-09T17:55:13.960413+0000 mon.c (mon.2) 903 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:55:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:15 vm02 bash[23351]: audit 2026-03-09T17:55:14.287008+0000 mon.c (mon.2) 904 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:55:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:15 vm02 bash[23351]: audit 2026-03-09T17:55:14.287008+0000 mon.c (mon.2) 904 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:55:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:15 vm02 bash[23351]: audit 2026-03-09T17:55:14.288094+0000 mon.c (mon.2) 905 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:55:15.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:15 vm02 bash[23351]: audit 2026-03-09T17:55:14.288094+0000 mon.c (mon.2) 905 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:55:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:15 vm02 bash[23351]: audit 2026-03-09T17:55:14.293448+0000 mon.a (mon.0) 3549 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:55:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:15 vm02 bash[23351]: audit 2026-03-09T17:55:14.293448+0000 mon.a (mon.0) 3549 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:15 vm00 bash[28333]: audit 2026-03-09T17:55:14.287008+0000 mon.c (mon.2) 904 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:15 vm00 bash[28333]: audit 2026-03-09T17:55:14.287008+0000 mon.c (mon.2) 904 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:15 vm00 bash[28333]: audit 2026-03-09T17:55:14.288094+0000 mon.c (mon.2) 905 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:15 vm00 bash[28333]: audit 2026-03-09T17:55:14.288094+0000 mon.c (mon.2) 905 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:15 vm00 bash[28333]: audit 2026-03-09T17:55:14.293448+0000 mon.a (mon.0) 3549 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:15 vm00 bash[28333]: audit 2026-03-09T17:55:14.293448+0000 mon.a (mon.0) 3549 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:15 vm00 bash[20770]: audit 2026-03-09T17:55:14.287008+0000 mon.c (mon.2) 904 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:15 vm00 bash[20770]: audit 2026-03-09T17:55:14.287008+0000 mon.c (mon.2) 904 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:15 vm00 bash[20770]: audit 2026-03-09T17:55:14.288094+0000 mon.c (mon.2) 905 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:15 vm00 bash[20770]: audit 2026-03-09T17:55:14.288094+0000 mon.c (mon.2) 905 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:15 vm00 bash[20770]: audit 2026-03-09T17:55:14.293448+0000 mon.a (mon.0) 3549 : audit 
[INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:55:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:15 vm00 bash[20770]: audit 2026-03-09T17:55:14.293448+0000 mon.a (mon.0) 3549 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:55:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:16 vm02 bash[23351]: cluster 2026-03-09T17:55:15.046719+0000 mgr.y (mgr.14505) 1089 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:16.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:16 vm02 bash[23351]: cluster 2026-03-09T17:55:15.046719+0000 mgr.y (mgr.14505) 1089 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:16 vm00 bash[28333]: cluster 2026-03-09T17:55:15.046719+0000 mgr.y (mgr.14505) 1089 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:16 vm00 bash[28333]: cluster 2026-03-09T17:55:15.046719+0000 mgr.y (mgr.14505) 1089 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:16 vm00 bash[20770]: cluster 2026-03-09T17:55:15.046719+0000 mgr.y (mgr.14505) 1089 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:16 vm00 bash[20770]: cluster 2026-03-09T17:55:15.046719+0000 mgr.y (mgr.14505) 1089 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:55:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:55:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:55:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:18 vm02 bash[23351]: cluster 2026-03-09T17:55:17.047048+0000 mgr.y (mgr.14505) 1090 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:18 vm02 bash[23351]: cluster 2026-03-09T17:55:17.047048+0000 mgr.y (mgr.14505) 1090 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:18 vm00 bash[28333]: cluster 2026-03-09T17:55:17.047048+0000 mgr.y (mgr.14505) 1090 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:18 vm00 bash[28333]: cluster 2026-03-09T17:55:17.047048+0000 mgr.y (mgr.14505) 1090 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:18 vm00 bash[20770]: cluster 2026-03-09T17:55:17.047048+0000 mgr.y (mgr.14505) 1090 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 
KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:18 vm00 bash[20770]: cluster 2026-03-09T17:55:17.047048+0000 mgr.y (mgr.14505) 1090 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:20 vm02 bash[23351]: cluster 2026-03-09T17:55:19.047682+0000 mgr.y (mgr.14505) 1091 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:20.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:20 vm02 bash[23351]: cluster 2026-03-09T17:55:19.047682+0000 mgr.y (mgr.14505) 1091 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:20 vm00 bash[28333]: cluster 2026-03-09T17:55:19.047682+0000 mgr.y (mgr.14505) 1091 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:20 vm00 bash[28333]: cluster 2026-03-09T17:55:19.047682+0000 mgr.y (mgr.14505) 1091 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:20 vm00 bash[20770]: cluster 2026-03-09T17:55:19.047682+0000 mgr.y (mgr.14505) 1091 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:20 vm00 bash[20770]: cluster 2026-03-09T17:55:19.047682+0000 mgr.y (mgr.14505) 1091 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:22 vm02 bash[23351]: cluster 2026-03-09T17:55:21.047937+0000 mgr.y (mgr.14505) 1092 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:22 vm02 bash[23351]: cluster 2026-03-09T17:55:21.047937+0000 mgr.y (mgr.14505) 1092 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:22 vm00 bash[28333]: cluster 2026-03-09T17:55:21.047937+0000 mgr.y (mgr.14505) 1092 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:22 vm00 bash[28333]: cluster 2026-03-09T17:55:21.047937+0000 mgr.y (mgr.14505) 1092 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:22 vm00 bash[20770]: cluster 2026-03-09T17:55:21.047937+0000 mgr.y (mgr.14505) 1092 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 
2026-03-09T17:55:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:22 vm00 bash[20770]: cluster 2026-03-09T17:55:21.047937+0000 mgr.y (mgr.14505) 1092 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:23.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:55:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:55:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:24 vm02 bash[23351]: audit 2026-03-09T17:55:22.819624+0000 mgr.y (mgr.14505) 1093 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:24 vm02 bash[23351]: audit 2026-03-09T17:55:22.819624+0000 mgr.y (mgr.14505) 1093 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:24 vm02 bash[23351]: cluster 2026-03-09T17:55:23.048549+0000 mgr.y (mgr.14505) 1094 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:24 vm02 bash[23351]: cluster 2026-03-09T17:55:23.048549+0000 mgr.y (mgr.14505) 1094 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:24 vm00 bash[28333]: audit 2026-03-09T17:55:22.819624+0000 mgr.y (mgr.14505) 1093 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:24 vm00 bash[28333]: audit 2026-03-09T17:55:22.819624+0000 mgr.y (mgr.14505) 1093 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:24 vm00 bash[28333]: cluster 2026-03-09T17:55:23.048549+0000 mgr.y (mgr.14505) 1094 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:24 vm00 bash[28333]: cluster 2026-03-09T17:55:23.048549+0000 mgr.y (mgr.14505) 1094 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:24 vm00 bash[20770]: audit 2026-03-09T17:55:22.819624+0000 mgr.y (mgr.14505) 1093 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:24 vm00 bash[20770]: audit 2026-03-09T17:55:22.819624+0000 mgr.y (mgr.14505) 1093 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:24 vm00 bash[20770]: cluster 2026-03-09T17:55:23.048549+0000 mgr.y (mgr.14505) 1094 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 
1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:24 vm00 bash[20770]: cluster 2026-03-09T17:55:23.048549+0000 mgr.y (mgr.14505) 1094 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:26 vm02 bash[23351]: cluster 2026-03-09T17:55:25.048889+0000 mgr.y (mgr.14505) 1095 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:26 vm02 bash[23351]: cluster 2026-03-09T17:55:25.048889+0000 mgr.y (mgr.14505) 1095 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:55:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:55:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:55:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:26 vm00 bash[28333]: cluster 2026-03-09T17:55:25.048889+0000 mgr.y (mgr.14505) 1095 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:26 vm00 bash[28333]: cluster 2026-03-09T17:55:25.048889+0000 mgr.y (mgr.14505) 1095 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:26 vm00 bash[20770]: cluster 2026-03-09T17:55:25.048889+0000 mgr.y (mgr.14505) 1095 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:26 vm00 bash[20770]: cluster 2026-03-09T17:55:25.048889+0000 mgr.y (mgr.14505) 1095 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:28 vm02 bash[23351]: cluster 2026-03-09T17:55:27.049185+0000 mgr.y (mgr.14505) 1096 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:28.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:28 vm02 bash[23351]: cluster 2026-03-09T17:55:27.049185+0000 mgr.y (mgr.14505) 1096 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:28 vm00 bash[28333]: cluster 2026-03-09T17:55:27.049185+0000 mgr.y (mgr.14505) 1096 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:28 vm00 bash[28333]: cluster 2026-03-09T17:55:27.049185+0000 mgr.y (mgr.14505) 1096 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:28 vm00 bash[20770]: cluster 
2026-03-09T17:55:27.049185+0000 mgr.y (mgr.14505) 1096 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:28 vm00 bash[20770]: cluster 2026-03-09T17:55:27.049185+0000 mgr.y (mgr.14505) 1096 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T17:55:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:29 vm00 bash[28333]: audit 2026-03-09T17:55:28.817640+0000 mon.c (mon.2) 906 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:29 vm00 bash[28333]: audit 2026-03-09T17:55:28.817640+0000 mon.c (mon.2) 906 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:29 vm00 bash[20770]: audit 2026-03-09T17:55:28.817640+0000 mon.c (mon.2) 906 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:29 vm00 bash[20770]: audit 2026-03-09T17:55:28.817640+0000 mon.c (mon.2) 906 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:29.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:29 vm02 bash[23351]: audit 2026-03-09T17:55:28.817640+0000 mon.c (mon.2) 906 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:29.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:29 vm02 bash[23351]: audit 2026-03-09T17:55:28.817640+0000 mon.c (mon.2) 906 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:30 vm00 bash[20770]: cluster 2026-03-09T17:55:29.049743+0000 mgr.y (mgr.14505) 1097 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:30 vm00 bash[20770]: cluster 2026-03-09T17:55:29.049743+0000 mgr.y (mgr.14505) 1097 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:30 vm00 bash[28333]: cluster 2026-03-09T17:55:29.049743+0000 mgr.y (mgr.14505) 1097 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:30 vm00 bash[28333]: cluster 2026-03-09T17:55:29.049743+0000 mgr.y (mgr.14505) 1097 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:30.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:30 vm02 bash[23351]: cluster 2026-03-09T17:55:29.049743+0000 mgr.y (mgr.14505) 1097 
: cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:30.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:30 vm02 bash[23351]: cluster 2026-03-09T17:55:29.049743+0000 mgr.y (mgr.14505) 1097 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:32 vm00 bash[28333]: cluster 2026-03-09T17:55:31.050054+0000 mgr.y (mgr.14505) 1098 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:32 vm00 bash[28333]: cluster 2026-03-09T17:55:31.050054+0000 mgr.y (mgr.14505) 1098 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:32 vm00 bash[20770]: cluster 2026-03-09T17:55:31.050054+0000 mgr.y (mgr.14505) 1098 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:32 vm00 bash[20770]: cluster 2026-03-09T17:55:31.050054+0000 mgr.y (mgr.14505) 1098 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:32.830 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:32 vm02 bash[23351]: cluster 2026-03-09T17:55:31.050054+0000 mgr.y (mgr.14505) 1098 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:32.830 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:32 vm02 bash[23351]: cluster 2026-03-09T17:55:31.050054+0000 mgr.y (mgr.14505) 1098 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:33.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:55:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:34 vm00 bash[28333]: audit 2026-03-09T17:55:32.830309+0000 mgr.y (mgr.14505) 1099 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:34 vm00 bash[28333]: audit 2026-03-09T17:55:32.830309+0000 mgr.y (mgr.14505) 1099 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:34 vm00 bash[28333]: cluster 2026-03-09T17:55:33.050665+0000 mgr.y (mgr.14505) 1100 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:34 vm00 bash[28333]: cluster 2026-03-09T17:55:33.050665+0000 mgr.y (mgr.14505) 1100 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:34 vm00 
bash[20770]: audit 2026-03-09T17:55:32.830309+0000 mgr.y (mgr.14505) 1099 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:34 vm00 bash[20770]: audit 2026-03-09T17:55:32.830309+0000 mgr.y (mgr.14505) 1099 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:34 vm00 bash[20770]: cluster 2026-03-09T17:55:33.050665+0000 mgr.y (mgr.14505) 1100 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:34 vm00 bash[20770]: cluster 2026-03-09T17:55:33.050665+0000 mgr.y (mgr.14505) 1100 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:34 vm02 bash[23351]: audit 2026-03-09T17:55:32.830309+0000 mgr.y (mgr.14505) 1099 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:34 vm02 bash[23351]: audit 2026-03-09T17:55:32.830309+0000 mgr.y (mgr.14505) 1099 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:34 vm02 bash[23351]: cluster 2026-03-09T17:55:33.050665+0000 mgr.y (mgr.14505) 1100 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:34 vm02 bash[23351]: cluster 2026-03-09T17:55:33.050665+0000 mgr.y (mgr.14505) 1100 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:36 vm00 bash[28333]: cluster 2026-03-09T17:55:35.051069+0000 mgr.y (mgr.14505) 1101 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:36 vm00 bash[28333]: cluster 2026-03-09T17:55:35.051069+0000 mgr.y (mgr.14505) 1101 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:36 vm00 bash[20770]: cluster 2026-03-09T17:55:35.051069+0000 mgr.y (mgr.14505) 1101 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:36 vm00 bash[20770]: cluster 2026-03-09T17:55:35.051069+0000 mgr.y (mgr.14505) 1101 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:55:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:55:36] "GET /metrics HTTP/1.1" 
503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:55:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:36 vm02 bash[23351]: cluster 2026-03-09T17:55:35.051069+0000 mgr.y (mgr.14505) 1101 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:36 vm02 bash[23351]: cluster 2026-03-09T17:55:35.051069+0000 mgr.y (mgr.14505) 1101 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:38 vm00 bash[28333]: cluster 2026-03-09T17:55:37.051396+0000 mgr.y (mgr.14505) 1102 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:38 vm00 bash[28333]: cluster 2026-03-09T17:55:37.051396+0000 mgr.y (mgr.14505) 1102 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:38 vm00 bash[20770]: cluster 2026-03-09T17:55:37.051396+0000 mgr.y (mgr.14505) 1102 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:38 vm00 bash[20770]: cluster 2026-03-09T17:55:37.051396+0000 mgr.y (mgr.14505) 1102 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:38 vm02 bash[23351]: cluster 2026-03-09T17:55:37.051396+0000 mgr.y (mgr.14505) 1102 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:38 vm02 bash[23351]: cluster 2026-03-09T17:55:37.051396+0000 mgr.y (mgr.14505) 1102 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:40 vm00 bash[28333]: cluster 2026-03-09T17:55:39.051942+0000 mgr.y (mgr.14505) 1103 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:40 vm00 bash[28333]: cluster 2026-03-09T17:55:39.051942+0000 mgr.y (mgr.14505) 1103 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:40 vm00 bash[20770]: cluster 2026-03-09T17:55:39.051942+0000 mgr.y (mgr.14505) 1103 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:40 vm00 bash[20770]: cluster 2026-03-09T17:55:39.051942+0000 mgr.y (mgr.14505) 1103 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:40.885 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:40 vm02 bash[23351]: cluster 2026-03-09T17:55:39.051942+0000 mgr.y (mgr.14505) 1103 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:40.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:40 vm02 bash[23351]: cluster 2026-03-09T17:55:39.051942+0000 mgr.y (mgr.14505) 1103 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:42 vm00 bash[28333]: cluster 2026-03-09T17:55:41.052177+0000 mgr.y (mgr.14505) 1104 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:42 vm00 bash[28333]: cluster 2026-03-09T17:55:41.052177+0000 mgr.y (mgr.14505) 1104 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:42 vm00 bash[20770]: cluster 2026-03-09T17:55:41.052177+0000 mgr.y (mgr.14505) 1104 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:42 vm00 bash[20770]: cluster 2026-03-09T17:55:41.052177+0000 mgr.y (mgr.14505) 1104 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:42.840 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:42 vm02 bash[23351]: cluster 2026-03-09T17:55:41.052177+0000 mgr.y (mgr.14505) 1104 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:42.840 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:42 vm02 bash[23351]: cluster 2026-03-09T17:55:41.052177+0000 mgr.y (mgr.14505) 1104 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:43.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:55:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:44 vm00 bash[28333]: audit 2026-03-09T17:55:42.840113+0000 mgr.y (mgr.14505) 1105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:44 vm00 bash[28333]: audit 2026-03-09T17:55:42.840113+0000 mgr.y (mgr.14505) 1105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:44 vm00 bash[28333]: cluster 2026-03-09T17:55:43.052849+0000 mgr.y (mgr.14505) 1106 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:44 vm00 bash[28333]: cluster 2026-03-09T17:55:43.052849+0000 mgr.y (mgr.14505) 1106 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 
159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:44 vm00 bash[28333]: audit 2026-03-09T17:55:43.823916+0000 mon.c (mon.2) 907 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:44 vm00 bash[28333]: audit 2026-03-09T17:55:43.823916+0000 mon.c (mon.2) 907 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:44 vm00 bash[20770]: audit 2026-03-09T17:55:42.840113+0000 mgr.y (mgr.14505) 1105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:44 vm00 bash[20770]: audit 2026-03-09T17:55:42.840113+0000 mgr.y (mgr.14505) 1105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:44 vm00 bash[20770]: cluster 2026-03-09T17:55:43.052849+0000 mgr.y (mgr.14505) 1106 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:44 vm00 bash[20770]: cluster 2026-03-09T17:55:43.052849+0000 mgr.y (mgr.14505) 1106 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:44 vm00 bash[20770]: audit 2026-03-09T17:55:43.823916+0000 mon.c (mon.2) 907 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:44 vm00 bash[20770]: audit 2026-03-09T17:55:43.823916+0000 mon.c (mon.2) 907 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:44 vm02 bash[23351]: audit 2026-03-09T17:55:42.840113+0000 mgr.y (mgr.14505) 1105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:44 vm02 bash[23351]: audit 2026-03-09T17:55:42.840113+0000 mgr.y (mgr.14505) 1105 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:44 vm02 bash[23351]: cluster 2026-03-09T17:55:43.052849+0000 mgr.y (mgr.14505) 1106 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:44 vm02 bash[23351]: cluster 2026-03-09T17:55:43.052849+0000 mgr.y (mgr.14505) 1106 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:44.885 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:44 vm02 bash[23351]: audit 2026-03-09T17:55:43.823916+0000 mon.c (mon.2) 907 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:44 vm02 bash[23351]: audit 2026-03-09T17:55:43.823916+0000 mon.c (mon.2) 907 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:46 vm00 bash[28333]: cluster 2026-03-09T17:55:45.053165+0000 mgr.y (mgr.14505) 1107 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:46 vm00 bash[28333]: cluster 2026-03-09T17:55:45.053165+0000 mgr.y (mgr.14505) 1107 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:46 vm00 bash[20770]: cluster 2026-03-09T17:55:45.053165+0000 mgr.y (mgr.14505) 1107 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:46 vm00 bash[20770]: cluster 2026-03-09T17:55:45.053165+0000 mgr.y (mgr.14505) 1107 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:55:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:55:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:55:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:46 vm02 bash[23351]: cluster 2026-03-09T17:55:45.053165+0000 mgr.y (mgr.14505) 1107 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:46.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:46 vm02 bash[23351]: cluster 2026-03-09T17:55:45.053165+0000 mgr.y (mgr.14505) 1107 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:48 vm00 bash[28333]: cluster 2026-03-09T17:55:47.053472+0000 mgr.y (mgr.14505) 1108 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:48 vm00 bash[28333]: cluster 2026-03-09T17:55:47.053472+0000 mgr.y (mgr.14505) 1108 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:48 vm00 bash[20770]: cluster 2026-03-09T17:55:47.053472+0000 mgr.y (mgr.14505) 1108 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:48 vm00 bash[20770]: cluster 2026-03-09T17:55:47.053472+0000 mgr.y (mgr.14505) 1108 : cluster [DBG] pgmap v1541: 
228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:48 vm02 bash[23351]: cluster 2026-03-09T17:55:47.053472+0000 mgr.y (mgr.14505) 1108 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:48.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:48 vm02 bash[23351]: cluster 2026-03-09T17:55:47.053472+0000 mgr.y (mgr.14505) 1108 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:50 vm00 bash[28333]: cluster 2026-03-09T17:55:49.054158+0000 mgr.y (mgr.14505) 1109 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:50 vm00 bash[28333]: cluster 2026-03-09T17:55:49.054158+0000 mgr.y (mgr.14505) 1109 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:50 vm00 bash[20770]: cluster 2026-03-09T17:55:49.054158+0000 mgr.y (mgr.14505) 1109 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:50 vm00 bash[20770]: cluster 2026-03-09T17:55:49.054158+0000 mgr.y (mgr.14505) 1109 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:50 vm02 bash[23351]: cluster 2026-03-09T17:55:49.054158+0000 mgr.y (mgr.14505) 1109 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:50.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:50 vm02 bash[23351]: cluster 2026-03-09T17:55:49.054158+0000 mgr.y (mgr.14505) 1109 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:52 vm00 bash[28333]: cluster 2026-03-09T17:55:51.054513+0000 mgr.y (mgr.14505) 1110 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:52 vm00 bash[28333]: cluster 2026-03-09T17:55:51.054513+0000 mgr.y (mgr.14505) 1110 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:52 vm00 bash[20770]: cluster 2026-03-09T17:55:51.054513+0000 mgr.y (mgr.14505) 1110 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:52 vm00 bash[20770]: cluster 2026-03-09T17:55:51.054513+0000 mgr.y (mgr.14505) 1110 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:52.849 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:52 vm02 bash[23351]: cluster 2026-03-09T17:55:51.054513+0000 mgr.y (mgr.14505) 1110 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:52.849 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:52 vm02 bash[23351]: cluster 2026-03-09T17:55:51.054513+0000 mgr.y (mgr.14505) 1110 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:53.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:55:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:54 vm00 bash[28333]: audit 2026-03-09T17:55:52.849237+0000 mgr.y (mgr.14505) 1111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:54 vm00 bash[28333]: audit 2026-03-09T17:55:52.849237+0000 mgr.y (mgr.14505) 1111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:54 vm00 bash[28333]: cluster 2026-03-09T17:55:53.055185+0000 mgr.y (mgr.14505) 1112 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:54 vm00 bash[28333]: cluster 2026-03-09T17:55:53.055185+0000 mgr.y (mgr.14505) 1112 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:54 vm00 bash[20770]: audit 2026-03-09T17:55:52.849237+0000 mgr.y (mgr.14505) 1111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:54 vm00 bash[20770]: audit 2026-03-09T17:55:52.849237+0000 mgr.y (mgr.14505) 1111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:54 vm00 bash[20770]: cluster 2026-03-09T17:55:53.055185+0000 mgr.y (mgr.14505) 1112 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:54 vm00 bash[20770]: cluster 2026-03-09T17:55:53.055185+0000 mgr.y (mgr.14505) 1112 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:54 vm02 bash[23351]: audit 2026-03-09T17:55:52.849237+0000 mgr.y (mgr.14505) 1111 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:54 vm02 bash[23351]: audit 2026-03-09T17:55:52.849237+0000 mgr.y (mgr.14505) 1111 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:55:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:54 vm02 bash[23351]: cluster 2026-03-09T17:55:53.055185+0000 mgr.y (mgr.14505) 1112 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:54.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:54 vm02 bash[23351]: cluster 2026-03-09T17:55:53.055185+0000 mgr.y (mgr.14505) 1112 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:55:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:56 vm00 bash[28333]: cluster 2026-03-09T17:55:55.055521+0000 mgr.y (mgr.14505) 1113 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:56 vm00 bash[28333]: cluster 2026-03-09T17:55:55.055521+0000 mgr.y (mgr.14505) 1113 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:56 vm00 bash[20770]: cluster 2026-03-09T17:55:55.055521+0000 mgr.y (mgr.14505) 1113 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:56 vm00 bash[20770]: cluster 2026-03-09T17:55:55.055521+0000 mgr.y (mgr.14505) 1113 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:55:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:55:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:55:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:56 vm02 bash[23351]: cluster 2026-03-09T17:55:55.055521+0000 mgr.y (mgr.14505) 1113 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:56 vm02 bash[23351]: cluster 2026-03-09T17:55:55.055521+0000 mgr.y (mgr.14505) 1113 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:58 vm00 bash[28333]: cluster 2026-03-09T17:55:57.055911+0000 mgr.y (mgr.14505) 1114 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:58 vm00 bash[28333]: cluster 2026-03-09T17:55:57.055911+0000 mgr.y (mgr.14505) 1114 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:58 vm00 bash[20770]: cluster 2026-03-09T17:55:57.055911+0000 mgr.y (mgr.14505) 1114 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:55:58 vm00 bash[20770]: cluster 2026-03-09T17:55:57.055911+0000 mgr.y (mgr.14505) 1114 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:58 vm02 bash[23351]: cluster 2026-03-09T17:55:57.055911+0000 mgr.y (mgr.14505) 1114 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:58 vm02 bash[23351]: cluster 2026-03-09T17:55:57.055911+0000 mgr.y (mgr.14505) 1114 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:55:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:59 vm00 bash[28333]: audit 2026-03-09T17:55:58.829678+0000 mon.c (mon.2) 908 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:55:59 vm00 bash[28333]: audit 2026-03-09T17:55:58.829678+0000 mon.c (mon.2) 908 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:59 vm00 bash[20770]: audit 2026-03-09T17:55:58.829678+0000 mon.c (mon.2) 908 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:55:59 vm00 bash[20770]: audit 2026-03-09T17:55:58.829678+0000 mon.c (mon.2) 908 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:59 vm02 bash[23351]: audit 2026-03-09T17:55:58.829678+0000 mon.c (mon.2) 908 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:55:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:55:59 vm02 bash[23351]: audit 2026-03-09T17:55:58.829678+0000 mon.c (mon.2) 908 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:00 vm00 bash[28333]: cluster 2026-03-09T17:55:59.056685+0000 mgr.y (mgr.14505) 1115 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:00 vm00 bash[28333]: cluster 2026-03-09T17:55:59.056685+0000 mgr.y (mgr.14505) 1115 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:00 vm00 bash[20770]: cluster 2026-03-09T17:55:59.056685+0000 mgr.y (mgr.14505) 1115 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:00 vm00 bash[20770]: cluster 
2026-03-09T17:55:59.056685+0000 mgr.y (mgr.14505) 1115 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:00 vm02 bash[23351]: cluster 2026-03-09T17:55:59.056685+0000 mgr.y (mgr.14505) 1115 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:00 vm02 bash[23351]: cluster 2026-03-09T17:55:59.056685+0000 mgr.y (mgr.14505) 1115 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:02 vm00 bash[28333]: cluster 2026-03-09T17:56:01.056978+0000 mgr.y (mgr.14505) 1116 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:02 vm00 bash[28333]: cluster 2026-03-09T17:56:01.056978+0000 mgr.y (mgr.14505) 1116 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:02 vm00 bash[20770]: cluster 2026-03-09T17:56:01.056978+0000 mgr.y (mgr.14505) 1116 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:02 vm00 bash[20770]: cluster 2026-03-09T17:56:01.056978+0000 mgr.y (mgr.14505) 1116 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:02.860 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:02 vm02 bash[23351]: cluster 2026-03-09T17:56:01.056978+0000 mgr.y (mgr.14505) 1116 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:02.860 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:02 vm02 bash[23351]: cluster 2026-03-09T17:56:01.056978+0000 mgr.y (mgr.14505) 1116 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:03.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:56:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:04 vm00 bash[28333]: audit 2026-03-09T17:56:02.859578+0000 mgr.y (mgr.14505) 1117 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:04 vm00 bash[28333]: audit 2026-03-09T17:56:02.859578+0000 mgr.y (mgr.14505) 1117 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:04 vm00 bash[28333]: cluster 2026-03-09T17:56:03.057543+0000 mgr.y (mgr.14505) 1118 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:04.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:04 vm00 bash[28333]: cluster 2026-03-09T17:56:03.057543+0000 mgr.y (mgr.14505) 1118 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:04 vm00 bash[20770]: audit 2026-03-09T17:56:02.859578+0000 mgr.y (mgr.14505) 1117 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:04 vm00 bash[20770]: audit 2026-03-09T17:56:02.859578+0000 mgr.y (mgr.14505) 1117 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:04 vm00 bash[20770]: cluster 2026-03-09T17:56:03.057543+0000 mgr.y (mgr.14505) 1118 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:04 vm00 bash[20770]: cluster 2026-03-09T17:56:03.057543+0000 mgr.y (mgr.14505) 1118 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:04 vm02 bash[23351]: audit 2026-03-09T17:56:02.859578+0000 mgr.y (mgr.14505) 1117 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:04 vm02 bash[23351]: audit 2026-03-09T17:56:02.859578+0000 mgr.y (mgr.14505) 1117 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:04 vm02 bash[23351]: cluster 2026-03-09T17:56:03.057543+0000 mgr.y (mgr.14505) 1118 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:04 vm02 bash[23351]: cluster 2026-03-09T17:56:03.057543+0000 mgr.y (mgr.14505) 1118 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:06 vm00 bash[28333]: cluster 2026-03-09T17:56:05.057868+0000 mgr.y (mgr.14505) 1119 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:06 vm00 bash[28333]: cluster 2026-03-09T17:56:05.057868+0000 mgr.y (mgr.14505) 1119 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:06 vm00 bash[20770]: cluster 2026-03-09T17:56:05.057868+0000 mgr.y (mgr.14505) 1119 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:06 vm00 bash[20770]: cluster 
2026-03-09T17:56:05.057868+0000 mgr.y (mgr.14505) 1119 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:56:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:56:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:56:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:06 vm02 bash[23351]: cluster 2026-03-09T17:56:05.057868+0000 mgr.y (mgr.14505) 1119 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:06 vm02 bash[23351]: cluster 2026-03-09T17:56:05.057868+0000 mgr.y (mgr.14505) 1119 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:08 vm00 bash[28333]: cluster 2026-03-09T17:56:07.058132+0000 mgr.y (mgr.14505) 1120 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:08 vm00 bash[28333]: cluster 2026-03-09T17:56:07.058132+0000 mgr.y (mgr.14505) 1120 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:08 vm00 bash[20770]: cluster 2026-03-09T17:56:07.058132+0000 mgr.y (mgr.14505) 1120 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:08 vm00 bash[20770]: cluster 2026-03-09T17:56:07.058132+0000 mgr.y (mgr.14505) 1120 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:08 vm02 bash[23351]: cluster 2026-03-09T17:56:07.058132+0000 mgr.y (mgr.14505) 1120 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:08 vm02 bash[23351]: cluster 2026-03-09T17:56:07.058132+0000 mgr.y (mgr.14505) 1120 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:10.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:10 vm02 bash[23351]: cluster 2026-03-09T17:56:09.058867+0000 mgr.y (mgr.14505) 1121 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:10 vm02 bash[23351]: cluster 2026-03-09T17:56:09.058867+0000 mgr.y (mgr.14505) 1121 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:10 vm00 bash[28333]: cluster 2026-03-09T17:56:09.058867+0000 mgr.y (mgr.14505) 1121 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T17:56:11.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:10 vm00 bash[28333]: cluster 2026-03-09T17:56:09.058867+0000 mgr.y (mgr.14505) 1121 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:10 vm00 bash[20770]: cluster 2026-03-09T17:56:09.058867+0000 mgr.y (mgr.14505) 1121 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:11.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:10 vm00 bash[20770]: cluster 2026-03-09T17:56:09.058867+0000 mgr.y (mgr.14505) 1121 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:11.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:11 vm02 bash[23351]: cluster 2026-03-09T17:56:11.059211+0000 mgr.y (mgr.14505) 1122 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:11 vm02 bash[23351]: cluster 2026-03-09T17:56:11.059211+0000 mgr.y (mgr.14505) 1122 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:12.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:11 vm00 bash[28333]: cluster 2026-03-09T17:56:11.059211+0000 mgr.y (mgr.14505) 1122 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:12.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:11 vm00 bash[28333]: cluster 2026-03-09T17:56:11.059211+0000 mgr.y (mgr.14505) 1122 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:12.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:11 vm00 bash[20770]: cluster 2026-03-09T17:56:11.059211+0000 mgr.y (mgr.14505) 1122 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:12.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:11 vm00 bash[20770]: cluster 2026-03-09T17:56:11.059211+0000 mgr.y (mgr.14505) 1122 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:13.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:56:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:56:14.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:14 vm02 bash[23351]: audit 2026-03-09T17:56:12.865286+0000 mgr.y (mgr.14505) 1123 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:14 vm02 bash[23351]: audit 2026-03-09T17:56:12.865286+0000 mgr.y (mgr.14505) 1123 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:14 vm02 bash[23351]: cluster 2026-03-09T17:56:13.059780+0000 mgr.y (mgr.14505) 1124 : cluster [DBG] pgmap v1554: 228 pgs: 228 
active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:14 vm02 bash[23351]: cluster 2026-03-09T17:56:13.059780+0000 mgr.y (mgr.14505) 1124 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:14 vm02 bash[23351]: audit 2026-03-09T17:56:13.835816+0000 mon.c (mon.2) 909 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:14 vm02 bash[23351]: audit 2026-03-09T17:56:13.835816+0000 mon.c (mon.2) 909 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:14 vm00 bash[28333]: audit 2026-03-09T17:56:12.865286+0000 mgr.y (mgr.14505) 1123 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:14 vm00 bash[28333]: audit 2026-03-09T17:56:12.865286+0000 mgr.y (mgr.14505) 1123 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:14 vm00 bash[28333]: cluster 2026-03-09T17:56:13.059780+0000 mgr.y (mgr.14505) 1124 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:14 vm00 bash[28333]: cluster 2026-03-09T17:56:13.059780+0000 mgr.y (mgr.14505) 1124 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:14 vm00 bash[28333]: audit 2026-03-09T17:56:13.835816+0000 mon.c (mon.2) 909 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:14 vm00 bash[28333]: audit 2026-03-09T17:56:13.835816+0000 mon.c (mon.2) 909 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:14 vm00 bash[20770]: audit 2026-03-09T17:56:12.865286+0000 mgr.y (mgr.14505) 1123 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:14 vm00 bash[20770]: audit 2026-03-09T17:56:12.865286+0000 mgr.y (mgr.14505) 1123 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:14 vm00 bash[20770]: cluster 2026-03-09T17:56:13.059780+0000 mgr.y (mgr.14505) 1124 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:14 vm00 bash[20770]: cluster 2026-03-09T17:56:13.059780+0000 mgr.y (mgr.14505) 1124 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:14 vm00 bash[20770]: audit 2026-03-09T17:56:13.835816+0000 mon.c (mon.2) 909 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:14.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:14 vm00 bash[20770]: audit 2026-03-09T17:56:13.835816+0000 mon.c (mon.2) 909 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:15.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:15 vm02 bash[23351]: audit 2026-03-09T17:56:14.332158+0000 mon.c (mon.2) 910 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:56:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:15 vm02 bash[23351]: audit 2026-03-09T17:56:14.332158+0000 mon.c (mon.2) 910 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:56:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:15 vm00 bash[28333]: audit 2026-03-09T17:56:14.332158+0000 mon.c (mon.2) 910 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:56:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:15 vm00 bash[28333]: audit 2026-03-09T17:56:14.332158+0000 mon.c (mon.2) 910 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:56:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:15 vm00 bash[20770]: audit 2026-03-09T17:56:14.332158+0000 mon.c (mon.2) 910 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:56:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:15 vm00 bash[20770]: audit 2026-03-09T17:56:14.332158+0000 mon.c (mon.2) 910 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:56:16.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:16 vm02 bash[23351]: cluster 2026-03-09T17:56:15.060155+0000 mgr.y (mgr.14505) 1125 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:16.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:16 vm02 bash[23351]: cluster 2026-03-09T17:56:15.060155+0000 mgr.y (mgr.14505) 1125 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:16 vm00 bash[28333]: cluster 2026-03-09T17:56:15.060155+0000 mgr.y (mgr.14505) 1125 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:16.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:16 vm00 bash[28333]: cluster 2026-03-09T17:56:15.060155+0000 mgr.y (mgr.14505) 1125 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:16 vm00 bash[20770]: cluster 2026-03-09T17:56:15.060155+0000 mgr.y (mgr.14505) 1125 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:16 vm00 bash[20770]: cluster 2026-03-09T17:56:15.060155+0000 mgr.y (mgr.14505) 1125 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:56:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:56:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:56:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:18 vm00 bash[28333]: cluster 2026-03-09T17:56:17.060450+0000 mgr.y (mgr.14505) 1126 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:18 vm00 bash[28333]: cluster 2026-03-09T17:56:17.060450+0000 mgr.y (mgr.14505) 1126 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:18 vm00 bash[20770]: cluster 2026-03-09T17:56:17.060450+0000 mgr.y (mgr.14505) 1126 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:18 vm00 bash[20770]: cluster 2026-03-09T17:56:17.060450+0000 mgr.y (mgr.14505) 1126 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:18 vm02 bash[23351]: cluster 2026-03-09T17:56:17.060450+0000 mgr.y (mgr.14505) 1126 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:18 vm02 bash[23351]: cluster 2026-03-09T17:56:17.060450+0000 mgr.y (mgr.14505) 1126 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:20.419 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:20 vm02 bash[23351]: cluster 2026-03-09T17:56:19.061014+0000 mgr.y (mgr.14505) 1127 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:20.419 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:20 vm02 bash[23351]: cluster 2026-03-09T17:56:19.061014+0000 mgr.y (mgr.14505) 1127 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:20 vm00 bash[28333]: cluster 2026-03-09T17:56:19.061014+0000 mgr.y (mgr.14505) 1127 : cluster [DBG] pgmap v1557: 228 
pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:20 vm00 bash[28333]: cluster 2026-03-09T17:56:19.061014+0000 mgr.y (mgr.14505) 1127 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:20 vm00 bash[20770]: cluster 2026-03-09T17:56:19.061014+0000 mgr.y (mgr.14505) 1127 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:20 vm00 bash[20770]: cluster 2026-03-09T17:56:19.061014+0000 mgr.y (mgr.14505) 1127 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.210057+0000 mon.a (mon.0) 3550 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.210057+0000 mon.a (mon.0) 3550 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.217575+0000 mon.a (mon.0) 3551 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.217575+0000 mon.a (mon.0) 3551 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.220553+0000 mon.c (mon.2) 911 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.220553+0000 mon.c (mon.2) 911 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.221460+0000 mon.c (mon.2) 912 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.221460+0000 mon.c (mon.2) 912 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.225587+0000 mon.a (mon.0) 3552 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:21 vm00 bash[28333]: audit 2026-03-09T17:56:20.225587+0000 mon.a (mon.0) 3552 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.210057+0000 mon.a (mon.0) 3550 : audit [INF] from='mgr.14505 ' entity='mgr.y' 
2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.210057+0000 mon.a (mon.0) 3550 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.217575+0000 mon.a (mon.0) 3551 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.217575+0000 mon.a (mon.0) 3551 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.220553+0000 mon.c (mon.2) 911 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.220553+0000 mon.c (mon.2) 911 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.221460+0000 mon.c (mon.2) 912 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.221460+0000 mon.c (mon.2) 912 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.225587+0000 mon.a (mon.0) 3552 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:21 vm00 bash[20770]: audit 2026-03-09T17:56:20.225587+0000 mon.a (mon.0) 3552 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.210057+0000 mon.a (mon.0) 3550 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.210057+0000 mon.a (mon.0) 3550 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.217575+0000 mon.a (mon.0) 3551 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.217575+0000 mon.a (mon.0) 3551 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.220553+0000 mon.c (mon.2) 911 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.220553+0000 mon.c (mon.2) 911 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.221460+0000 mon.c (mon.2) 912 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.221460+0000 mon.c (mon.2) 912 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.225587+0000 mon.a (mon.0) 3552 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:21 vm02 bash[23351]: audit 2026-03-09T17:56:20.225587+0000 mon.a (mon.0) 3552 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:56:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:22 vm00 bash[28333]: cluster 2026-03-09T17:56:21.061376+0000 mgr.y (mgr.14505) 1128 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:22.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:22 vm00 bash[28333]: cluster 2026-03-09T17:56:21.061376+0000 mgr.y (mgr.14505) 1128 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:22 vm00 bash[20770]: cluster 2026-03-09T17:56:21.061376+0000 mgr.y (mgr.14505) 1128 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:22.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:22 vm00 bash[20770]: cluster 2026-03-09T17:56:21.061376+0000 mgr.y (mgr.14505) 1128 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:22 vm02 bash[23351]: cluster 2026-03-09T17:56:21.061376+0000 mgr.y (mgr.14505) 1128 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:22.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:22 vm02 bash[23351]: cluster 2026-03-09T17:56:21.061376+0000 mgr.y (mgr.14505) 1128 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:23.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:56:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:24 vm00 bash[28333]: audit 2026-03-09T17:56:22.875763+0000 mgr.y (mgr.14505) 1129 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:24 vm00 bash[28333]: audit 2026-03-09T17:56:22.875763+0000 mgr.y (mgr.14505) 1129 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:24 vm00 bash[28333]: cluster 
2026-03-09T17:56:23.061934+0000 mgr.y (mgr.14505) 1130 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:24 vm00 bash[28333]: cluster 2026-03-09T17:56:23.061934+0000 mgr.y (mgr.14505) 1130 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:24 vm00 bash[20770]: audit 2026-03-09T17:56:22.875763+0000 mgr.y (mgr.14505) 1129 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:24 vm00 bash[20770]: audit 2026-03-09T17:56:22.875763+0000 mgr.y (mgr.14505) 1129 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:24 vm00 bash[20770]: cluster 2026-03-09T17:56:23.061934+0000 mgr.y (mgr.14505) 1130 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:24 vm00 bash[20770]: cluster 2026-03-09T17:56:23.061934+0000 mgr.y (mgr.14505) 1130 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:24 vm02 bash[23351]: audit 2026-03-09T17:56:22.875763+0000 mgr.y (mgr.14505) 1129 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:24 vm02 bash[23351]: audit 2026-03-09T17:56:22.875763+0000 mgr.y (mgr.14505) 1129 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:24 vm02 bash[23351]: cluster 2026-03-09T17:56:23.061934+0000 mgr.y (mgr.14505) 1130 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:24.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:24 vm02 bash[23351]: cluster 2026-03-09T17:56:23.061934+0000 mgr.y (mgr.14505) 1130 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:26 vm00 bash[28333]: cluster 2026-03-09T17:56:25.062237+0000 mgr.y (mgr.14505) 1131 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:26 vm00 bash[28333]: cluster 2026-03-09T17:56:25.062237+0000 mgr.y (mgr.14505) 1131 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:26 vm00 bash[20770]: cluster 2026-03-09T17:56:25.062237+0000 mgr.y (mgr.14505) 1131 : cluster [DBG] pgmap 
v1560: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:26 vm00 bash[20770]: cluster 2026-03-09T17:56:25.062237+0000 mgr.y (mgr.14505) 1131 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:56:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:56:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:56:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:26 vm02 bash[23351]: cluster 2026-03-09T17:56:25.062237+0000 mgr.y (mgr.14505) 1131 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:26.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:26 vm02 bash[23351]: cluster 2026-03-09T17:56:25.062237+0000 mgr.y (mgr.14505) 1131 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:28 vm00 bash[28333]: cluster 2026-03-09T17:56:27.062570+0000 mgr.y (mgr.14505) 1132 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:28 vm00 bash[28333]: cluster 2026-03-09T17:56:27.062570+0000 mgr.y (mgr.14505) 1132 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:28 vm00 bash[20770]: cluster 2026-03-09T17:56:27.062570+0000 mgr.y (mgr.14505) 1132 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:28 vm00 bash[20770]: cluster 2026-03-09T17:56:27.062570+0000 mgr.y (mgr.14505) 1132 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:28 vm02 bash[23351]: cluster 2026-03-09T17:56:27.062570+0000 mgr.y (mgr.14505) 1132 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:28 vm02 bash[23351]: cluster 2026-03-09T17:56:27.062570+0000 mgr.y (mgr.14505) 1132 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:29 vm00 bash[28333]: audit 2026-03-09T17:56:28.841709+0000 mon.c (mon.2) 913 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:29 vm00 bash[28333]: audit 2026-03-09T17:56:28.841709+0000 mon.c (mon.2) 913 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:29.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:29 vm00 bash[20770]: audit 2026-03-09T17:56:28.841709+0000 mon.c (mon.2) 913 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:29 vm00 bash[20770]: audit 2026-03-09T17:56:28.841709+0000 mon.c (mon.2) 913 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:29 vm02 bash[23351]: audit 2026-03-09T17:56:28.841709+0000 mon.c (mon.2) 913 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:29 vm02 bash[23351]: audit 2026-03-09T17:56:28.841709+0000 mon.c (mon.2) 913 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:30 vm00 bash[28333]: cluster 2026-03-09T17:56:29.063216+0000 mgr.y (mgr.14505) 1133 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:30 vm00 bash[28333]: cluster 2026-03-09T17:56:29.063216+0000 mgr.y (mgr.14505) 1133 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:30 vm00 bash[20770]: cluster 2026-03-09T17:56:29.063216+0000 mgr.y (mgr.14505) 1133 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:30 vm00 bash[20770]: cluster 2026-03-09T17:56:29.063216+0000 mgr.y (mgr.14505) 1133 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:30.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:30 vm02 bash[23351]: cluster 2026-03-09T17:56:29.063216+0000 mgr.y (mgr.14505) 1133 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:30 vm02 bash[23351]: cluster 2026-03-09T17:56:29.063216+0000 mgr.y (mgr.14505) 1133 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:32 vm00 bash[28333]: cluster 2026-03-09T17:56:31.063527+0000 mgr.y (mgr.14505) 1134 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:32 vm00 bash[28333]: cluster 2026-03-09T17:56:31.063527+0000 mgr.y (mgr.14505) 1134 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:32 
vm00 bash[20770]: cluster 2026-03-09T17:56:31.063527+0000 mgr.y (mgr.14505) 1134 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:32 vm00 bash[20770]: cluster 2026-03-09T17:56:31.063527+0000 mgr.y (mgr.14505) 1134 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:32.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:32 vm02 bash[23351]: cluster 2026-03-09T17:56:31.063527+0000 mgr.y (mgr.14505) 1134 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:32 vm02 bash[23351]: cluster 2026-03-09T17:56:31.063527+0000 mgr.y (mgr.14505) 1134 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:33.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:56:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:34 vm00 bash[28333]: audit 2026-03-09T17:56:32.886510+0000 mgr.y (mgr.14505) 1135 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:34 vm00 bash[28333]: audit 2026-03-09T17:56:32.886510+0000 mgr.y (mgr.14505) 1135 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:34 vm00 bash[28333]: cluster 2026-03-09T17:56:33.064395+0000 mgr.y (mgr.14505) 1136 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:34 vm00 bash[28333]: cluster 2026-03-09T17:56:33.064395+0000 mgr.y (mgr.14505) 1136 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:34 vm00 bash[20770]: audit 2026-03-09T17:56:32.886510+0000 mgr.y (mgr.14505) 1135 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:34 vm00 bash[20770]: audit 2026-03-09T17:56:32.886510+0000 mgr.y (mgr.14505) 1135 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:34 vm00 bash[20770]: cluster 2026-03-09T17:56:33.064395+0000 mgr.y (mgr.14505) 1136 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:34 vm00 bash[20770]: cluster 2026-03-09T17:56:33.064395+0000 mgr.y (mgr.14505) 1136 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T17:56:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:34 vm02 bash[23351]: audit 2026-03-09T17:56:32.886510+0000 mgr.y (mgr.14505) 1135 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:34 vm02 bash[23351]: audit 2026-03-09T17:56:32.886510+0000 mgr.y (mgr.14505) 1135 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:34 vm02 bash[23351]: cluster 2026-03-09T17:56:33.064395+0000 mgr.y (mgr.14505) 1136 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:34 vm02 bash[23351]: cluster 2026-03-09T17:56:33.064395+0000 mgr.y (mgr.14505) 1136 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:36 vm00 bash[28333]: cluster 2026-03-09T17:56:35.064694+0000 mgr.y (mgr.14505) 1137 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:36 vm00 bash[28333]: cluster 2026-03-09T17:56:35.064694+0000 mgr.y (mgr.14505) 1137 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:36 vm00 bash[20770]: cluster 2026-03-09T17:56:35.064694+0000 mgr.y (mgr.14505) 1137 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:36 vm00 bash[20770]: cluster 2026-03-09T17:56:35.064694+0000 mgr.y (mgr.14505) 1137 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:56:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:56:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:56:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:36 vm02 bash[23351]: cluster 2026-03-09T17:56:35.064694+0000 mgr.y (mgr.14505) 1137 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:36 vm02 bash[23351]: cluster 2026-03-09T17:56:35.064694+0000 mgr.y (mgr.14505) 1137 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:38 vm00 bash[28333]: cluster 2026-03-09T17:56:37.064973+0000 mgr.y (mgr.14505) 1138 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:38 vm00 bash[28333]: cluster 2026-03-09T17:56:37.064973+0000 mgr.y (mgr.14505) 1138 : cluster 
[DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:38 vm00 bash[20770]: cluster 2026-03-09T17:56:37.064973+0000 mgr.y (mgr.14505) 1138 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:38 vm00 bash[20770]: cluster 2026-03-09T17:56:37.064973+0000 mgr.y (mgr.14505) 1138 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:38 vm02 bash[23351]: cluster 2026-03-09T17:56:37.064973+0000 mgr.y (mgr.14505) 1138 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:38 vm02 bash[23351]: cluster 2026-03-09T17:56:37.064973+0000 mgr.y (mgr.14505) 1138 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:40 vm02 bash[23351]: cluster 2026-03-09T17:56:39.065753+0000 mgr.y (mgr.14505) 1139 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:40 vm02 bash[23351]: cluster 2026-03-09T17:56:39.065753+0000 mgr.y (mgr.14505) 1139 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:40 vm00 bash[28333]: cluster 2026-03-09T17:56:39.065753+0000 mgr.y (mgr.14505) 1139 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:40 vm00 bash[28333]: cluster 2026-03-09T17:56:39.065753+0000 mgr.y (mgr.14505) 1139 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:40 vm00 bash[20770]: cluster 2026-03-09T17:56:39.065753+0000 mgr.y (mgr.14505) 1139 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:40 vm00 bash[20770]: cluster 2026-03-09T17:56:39.065753+0000 mgr.y (mgr.14505) 1139 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:42 vm02 bash[23351]: cluster 2026-03-09T17:56:41.066022+0000 mgr.y (mgr.14505) 1140 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:42 vm02 bash[23351]: cluster 2026-03-09T17:56:41.066022+0000 mgr.y (mgr.14505) 1140 : cluster 
[DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:42 vm00 bash[28333]: cluster 2026-03-09T17:56:41.066022+0000 mgr.y (mgr.14505) 1140 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:42 vm00 bash[28333]: cluster 2026-03-09T17:56:41.066022+0000 mgr.y (mgr.14505) 1140 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:42 vm00 bash[20770]: cluster 2026-03-09T17:56:41.066022+0000 mgr.y (mgr.14505) 1140 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:42 vm00 bash[20770]: cluster 2026-03-09T17:56:41.066022+0000 mgr.y (mgr.14505) 1140 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:43.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:56:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:56:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:44 vm02 bash[23351]: audit 2026-03-09T17:56:42.890179+0000 mgr.y (mgr.14505) 1141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:44 vm02 bash[23351]: audit 2026-03-09T17:56:42.890179+0000 mgr.y (mgr.14505) 1141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:44 vm02 bash[23351]: cluster 2026-03-09T17:56:43.066525+0000 mgr.y (mgr.14505) 1142 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:44 vm02 bash[23351]: cluster 2026-03-09T17:56:43.066525+0000 mgr.y (mgr.14505) 1142 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:44 vm02 bash[23351]: audit 2026-03-09T17:56:43.847793+0000 mon.c (mon.2) 914 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:44 vm02 bash[23351]: audit 2026-03-09T17:56:43.847793+0000 mon.c (mon.2) 914 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:44 vm00 bash[28333]: audit 2026-03-09T17:56:42.890179+0000 mgr.y (mgr.14505) 1141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:44 vm00 bash[28333]: audit 2026-03-09T17:56:42.890179+0000 mgr.y (mgr.14505) 1141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:44 vm00 bash[28333]: cluster 2026-03-09T17:56:43.066525+0000 mgr.y (mgr.14505) 1142 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:44 vm00 bash[28333]: cluster 2026-03-09T17:56:43.066525+0000 mgr.y (mgr.14505) 1142 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:44 vm00 bash[28333]: audit 2026-03-09T17:56:43.847793+0000 mon.c (mon.2) 914 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:44 vm00 bash[28333]: audit 2026-03-09T17:56:43.847793+0000 mon.c (mon.2) 914 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:44 vm00 bash[20770]: audit 2026-03-09T17:56:42.890179+0000 mgr.y (mgr.14505) 1141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:44 vm00 bash[20770]: audit 2026-03-09T17:56:42.890179+0000 mgr.y (mgr.14505) 1141 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:44 vm00 bash[20770]: cluster 2026-03-09T17:56:43.066525+0000 mgr.y (mgr.14505) 1142 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:44 vm00 bash[20770]: cluster 2026-03-09T17:56:43.066525+0000 mgr.y (mgr.14505) 1142 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:44 vm00 bash[20770]: audit 2026-03-09T17:56:43.847793+0000 mon.c (mon.2) 914 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:44 vm00 bash[20770]: audit 2026-03-09T17:56:43.847793+0000 mon.c (mon.2) 914 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:46 vm02 bash[23351]: cluster 2026-03-09T17:56:45.066955+0000 mgr.y (mgr.14505) 1143 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 
2026-03-09T17:56:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:46 vm02 bash[23351]: cluster 2026-03-09T17:56:45.066955+0000 mgr.y (mgr.14505) 1143 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:46 vm00 bash[20770]: cluster 2026-03-09T17:56:45.066955+0000 mgr.y (mgr.14505) 1143 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:46 vm00 bash[20770]: cluster 2026-03-09T17:56:45.066955+0000 mgr.y (mgr.14505) 1143 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:56:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:56:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:56:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:46 vm00 bash[28333]: cluster 2026-03-09T17:56:45.066955+0000 mgr.y (mgr.14505) 1143 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:46 vm00 bash[28333]: cluster 2026-03-09T17:56:45.066955+0000 mgr.y (mgr.14505) 1143 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:48 vm02 bash[23351]: cluster 2026-03-09T17:56:47.067236+0000 mgr.y (mgr.14505) 1144 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:48 vm02 bash[23351]: cluster 2026-03-09T17:56:47.067236+0000 mgr.y (mgr.14505) 1144 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:48 vm00 bash[28333]: cluster 2026-03-09T17:56:47.067236+0000 mgr.y (mgr.14505) 1144 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:48 vm00 bash[28333]: cluster 2026-03-09T17:56:47.067236+0000 mgr.y (mgr.14505) 1144 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:48 vm00 bash[20770]: cluster 2026-03-09T17:56:47.067236+0000 mgr.y (mgr.14505) 1144 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:48 vm00 bash[20770]: cluster 2026-03-09T17:56:47.067236+0000 mgr.y (mgr.14505) 1144 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:50.635 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:50 vm02 bash[23351]: cluster 2026-03-09T17:56:49.067829+0000 mgr.y (mgr.14505) 1145 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:50 vm02 bash[23351]: cluster 2026-03-09T17:56:49.067829+0000 mgr.y (mgr.14505) 1145 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:50 vm00 bash[28333]: cluster 2026-03-09T17:56:49.067829+0000 mgr.y (mgr.14505) 1145 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:50 vm00 bash[28333]: cluster 2026-03-09T17:56:49.067829+0000 mgr.y (mgr.14505) 1145 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:50 vm00 bash[20770]: cluster 2026-03-09T17:56:49.067829+0000 mgr.y (mgr.14505) 1145 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:50 vm00 bash[20770]: cluster 2026-03-09T17:56:49.067829+0000 mgr.y (mgr.14505) 1145 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T17:56:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:52 vm02 bash[23351]: cluster 2026-03-09T17:56:51.068151+0000 mgr.y (mgr.14505) 1146 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:52 vm02 bash[23351]: cluster 2026-03-09T17:56:51.068151+0000 mgr.y (mgr.14505) 1146 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:52 vm00 bash[28333]: cluster 2026-03-09T17:56:51.068151+0000 mgr.y (mgr.14505) 1146 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:52 vm00 bash[28333]: cluster 2026-03-09T17:56:51.068151+0000 mgr.y (mgr.14505) 1146 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:52 vm00 bash[20770]: cluster 2026-03-09T17:56:51.068151+0000 mgr.y (mgr.14505) 1146 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:52 vm00 bash[20770]: cluster 2026-03-09T17:56:51.068151+0000 mgr.y (mgr.14505) 1146 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:53.385 
INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:56:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:56:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:54 vm02 bash[23351]: audit 2026-03-09T17:56:52.897687+0000 mgr.y (mgr.14505) 1147 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:54 vm02 bash[23351]: audit 2026-03-09T17:56:52.897687+0000 mgr.y (mgr.14505) 1147 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:54 vm02 bash[23351]: cluster 2026-03-09T17:56:53.068708+0000 mgr.y (mgr.14505) 1148 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:54 vm02 bash[23351]: cluster 2026-03-09T17:56:53.068708+0000 mgr.y (mgr.14505) 1148 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:54 vm00 bash[28333]: audit 2026-03-09T17:56:52.897687+0000 mgr.y (mgr.14505) 1147 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:54 vm00 bash[28333]: audit 2026-03-09T17:56:52.897687+0000 mgr.y (mgr.14505) 1147 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:54 vm00 bash[28333]: cluster 2026-03-09T17:56:53.068708+0000 mgr.y (mgr.14505) 1148 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:54 vm00 bash[28333]: cluster 2026-03-09T17:56:53.068708+0000 mgr.y (mgr.14505) 1148 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:54 vm00 bash[20770]: audit 2026-03-09T17:56:52.897687+0000 mgr.y (mgr.14505) 1147 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:54 vm00 bash[20770]: audit 2026-03-09T17:56:52.897687+0000 mgr.y (mgr.14505) 1147 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:54 vm00 bash[20770]: cluster 2026-03-09T17:56:53.068708+0000 mgr.y (mgr.14505) 1148 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:54.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:54 vm00 bash[20770]: cluster 2026-03-09T17:56:53.068708+0000 mgr.y (mgr.14505) 1148 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:56:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:56 vm00 bash[28333]: cluster 2026-03-09T17:56:55.068994+0000 mgr.y (mgr.14505) 1149 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:56 vm00 bash[28333]: cluster 2026-03-09T17:56:55.068994+0000 mgr.y (mgr.14505) 1149 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:56 vm00 bash[20770]: cluster 2026-03-09T17:56:55.068994+0000 mgr.y (mgr.14505) 1149 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:56 vm00 bash[20770]: cluster 2026-03-09T17:56:55.068994+0000 mgr.y (mgr.14505) 1149 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:56:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:56:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:56:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:56 vm02 bash[23351]: cluster 2026-03-09T17:56:55.068994+0000 mgr.y (mgr.14505) 1149 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:56.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:56 vm02 bash[23351]: cluster 2026-03-09T17:56:55.068994+0000 mgr.y (mgr.14505) 1149 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:58 vm00 bash[28333]: cluster 2026-03-09T17:56:57.069243+0000 mgr.y (mgr.14505) 1150 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:58 vm00 bash[28333]: cluster 2026-03-09T17:56:57.069243+0000 mgr.y (mgr.14505) 1150 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:58 vm00 bash[20770]: cluster 2026-03-09T17:56:57.069243+0000 mgr.y (mgr.14505) 1150 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:58 vm00 bash[20770]: cluster 2026-03-09T17:56:57.069243+0000 mgr.y (mgr.14505) 1150 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:58 vm02 bash[23351]: cluster 2026-03-09T17:56:57.069243+0000 mgr.y (mgr.14505) 1150 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:58.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:58 vm02 bash[23351]: cluster 2026-03-09T17:56:57.069243+0000 mgr.y 
(mgr.14505) 1150 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:56:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:59 vm00 bash[28333]: audit 2026-03-09T17:56:58.854165+0000 mon.c (mon.2) 915 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:56:59 vm00 bash[28333]: audit 2026-03-09T17:56:58.854165+0000 mon.c (mon.2) 915 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:59 vm00 bash[20770]: audit 2026-03-09T17:56:58.854165+0000 mon.c (mon.2) 915 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:56:59 vm00 bash[20770]: audit 2026-03-09T17:56:58.854165+0000 mon.c (mon.2) 915 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:59 vm02 bash[23351]: audit 2026-03-09T17:56:58.854165+0000 mon.c (mon.2) 915 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:56:59.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:56:59 vm02 bash[23351]: audit 2026-03-09T17:56:58.854165+0000 mon.c (mon.2) 915 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:00 vm00 bash[28333]: cluster 2026-03-09T17:56:59.069787+0000 mgr.y (mgr.14505) 1151 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:00 vm00 bash[28333]: cluster 2026-03-09T17:56:59.069787+0000 mgr.y (mgr.14505) 1151 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:00 vm00 bash[20770]: cluster 2026-03-09T17:56:59.069787+0000 mgr.y (mgr.14505) 1151 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:00 vm00 bash[20770]: cluster 2026-03-09T17:56:59.069787+0000 mgr.y (mgr.14505) 1151 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:00 vm02 bash[23351]: cluster 2026-03-09T17:56:59.069787+0000 mgr.y (mgr.14505) 1151 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:00.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:00 vm02 bash[23351]: cluster 2026-03-09T17:56:59.069787+0000 mgr.y (mgr.14505) 1151 : cluster [DBG] pgmap v1577: 228 pgs: 228 
active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:02 vm00 bash[28333]: cluster 2026-03-09T17:57:01.070039+0000 mgr.y (mgr.14505) 1152 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:02 vm00 bash[28333]: cluster 2026-03-09T17:57:01.070039+0000 mgr.y (mgr.14505) 1152 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:02 vm00 bash[20770]: cluster 2026-03-09T17:57:01.070039+0000 mgr.y (mgr.14505) 1152 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:02 vm00 bash[20770]: cluster 2026-03-09T17:57:01.070039+0000 mgr.y (mgr.14505) 1152 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:02 vm02 bash[23351]: cluster 2026-03-09T17:57:01.070039+0000 mgr.y (mgr.14505) 1152 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:02.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:02 vm02 bash[23351]: cluster 2026-03-09T17:57:01.070039+0000 mgr.y (mgr.14505) 1152 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:03.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:57:02 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:04 vm00 bash[28333]: audit 2026-03-09T17:57:02.908198+0000 mgr.y (mgr.14505) 1153 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:04 vm00 bash[28333]: audit 2026-03-09T17:57:02.908198+0000 mgr.y (mgr.14505) 1153 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:04 vm00 bash[28333]: cluster 2026-03-09T17:57:03.070679+0000 mgr.y (mgr.14505) 1154 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:04 vm00 bash[28333]: cluster 2026-03-09T17:57:03.070679+0000 mgr.y (mgr.14505) 1154 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:04 vm00 bash[20770]: audit 2026-03-09T17:57:02.908198+0000 mgr.y (mgr.14505) 1153 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:04 vm00 bash[20770]: audit 2026-03-09T17:57:02.908198+0000 mgr.y 
(mgr.14505) 1153 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:04 vm00 bash[20770]: cluster 2026-03-09T17:57:03.070679+0000 mgr.y (mgr.14505) 1154 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:04 vm00 bash[20770]: cluster 2026-03-09T17:57:03.070679+0000 mgr.y (mgr.14505) 1154 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:04 vm02 bash[23351]: audit 2026-03-09T17:57:02.908198+0000 mgr.y (mgr.14505) 1153 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:04 vm02 bash[23351]: audit 2026-03-09T17:57:02.908198+0000 mgr.y (mgr.14505) 1153 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:04 vm02 bash[23351]: cluster 2026-03-09T17:57:03.070679+0000 mgr.y (mgr.14505) 1154 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:04 vm02 bash[23351]: cluster 2026-03-09T17:57:03.070679+0000 mgr.y (mgr.14505) 1154 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:06 vm00 bash[28333]: cluster 2026-03-09T17:57:05.071000+0000 mgr.y (mgr.14505) 1155 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:06 vm00 bash[28333]: cluster 2026-03-09T17:57:05.071000+0000 mgr.y (mgr.14505) 1155 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:57:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:57:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:57:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:06 vm00 bash[20770]: cluster 2026-03-09T17:57:05.071000+0000 mgr.y (mgr.14505) 1155 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:06 vm00 bash[20770]: cluster 2026-03-09T17:57:05.071000+0000 mgr.y (mgr.14505) 1155 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:06.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:06 vm02 bash[23351]: cluster 2026-03-09T17:57:05.071000+0000 mgr.y (mgr.14505) 1155 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:06.885 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:06 vm02 bash[23351]: cluster 2026-03-09T17:57:05.071000+0000 mgr.y (mgr.14505) 1155 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:08 vm00 bash[28333]: cluster 2026-03-09T17:57:07.071297+0000 mgr.y (mgr.14505) 1156 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:08 vm00 bash[28333]: cluster 2026-03-09T17:57:07.071297+0000 mgr.y (mgr.14505) 1156 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:08 vm00 bash[20770]: cluster 2026-03-09T17:57:07.071297+0000 mgr.y (mgr.14505) 1156 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:08 vm00 bash[20770]: cluster 2026-03-09T17:57:07.071297+0000 mgr.y (mgr.14505) 1156 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:08.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:08 vm02 bash[23351]: cluster 2026-03-09T17:57:07.071297+0000 mgr.y (mgr.14505) 1156 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:08 vm02 bash[23351]: cluster 2026-03-09T17:57:07.071297+0000 mgr.y (mgr.14505) 1156 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:10 vm00 bash[28333]: cluster 2026-03-09T17:57:09.071881+0000 mgr.y (mgr.14505) 1157 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:10 vm00 bash[28333]: cluster 2026-03-09T17:57:09.071881+0000 mgr.y (mgr.14505) 1157 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:10 vm00 bash[20770]: cluster 2026-03-09T17:57:09.071881+0000 mgr.y (mgr.14505) 1157 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:10 vm00 bash[20770]: cluster 2026-03-09T17:57:09.071881+0000 mgr.y (mgr.14505) 1157 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:10.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:10 vm02 bash[23351]: cluster 2026-03-09T17:57:09.071881+0000 mgr.y (mgr.14505) 1157 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:10 vm02 bash[23351]: cluster 
2026-03-09T17:57:09.071881+0000 mgr.y (mgr.14505) 1157 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:12 vm00 bash[28333]: cluster 2026-03-09T17:57:11.072151+0000 mgr.y (mgr.14505) 1158 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:12 vm00 bash[28333]: cluster 2026-03-09T17:57:11.072151+0000 mgr.y (mgr.14505) 1158 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:12 vm00 bash[20770]: cluster 2026-03-09T17:57:11.072151+0000 mgr.y (mgr.14505) 1158 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:12 vm00 bash[20770]: cluster 2026-03-09T17:57:11.072151+0000 mgr.y (mgr.14505) 1158 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:12 vm02 bash[23351]: cluster 2026-03-09T17:57:11.072151+0000 mgr.y (mgr.14505) 1158 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:12.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:12 vm02 bash[23351]: cluster 2026-03-09T17:57:11.072151+0000 mgr.y (mgr.14505) 1158 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:13.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:57:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:14 vm00 bash[28333]: audit 2026-03-09T17:57:12.918705+0000 mgr.y (mgr.14505) 1159 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:14 vm00 bash[28333]: audit 2026-03-09T17:57:12.918705+0000 mgr.y (mgr.14505) 1159 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:14 vm00 bash[28333]: cluster 2026-03-09T17:57:13.072736+0000 mgr.y (mgr.14505) 1160 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:14 vm00 bash[28333]: cluster 2026-03-09T17:57:13.072736+0000 mgr.y (mgr.14505) 1160 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:14 vm00 bash[28333]: audit 2026-03-09T17:57:13.859971+0000 mon.c (mon.2) 916 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:14 vm00 bash[28333]: audit 2026-03-09T17:57:13.859971+0000 mon.c (mon.2) 916 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:14 vm00 bash[20770]: audit 2026-03-09T17:57:12.918705+0000 mgr.y (mgr.14505) 1159 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:14 vm00 bash[20770]: audit 2026-03-09T17:57:12.918705+0000 mgr.y (mgr.14505) 1159 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:14 vm00 bash[20770]: cluster 2026-03-09T17:57:13.072736+0000 mgr.y (mgr.14505) 1160 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:14 vm00 bash[20770]: cluster 2026-03-09T17:57:13.072736+0000 mgr.y (mgr.14505) 1160 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:14 vm00 bash[20770]: audit 2026-03-09T17:57:13.859971+0000 mon.c (mon.2) 916 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:14 vm00 bash[20770]: audit 2026-03-09T17:57:13.859971+0000 mon.c (mon.2) 916 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:14 vm02 bash[23351]: audit 2026-03-09T17:57:12.918705+0000 mgr.y (mgr.14505) 1159 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:14 vm02 bash[23351]: audit 2026-03-09T17:57:12.918705+0000 mgr.y (mgr.14505) 1159 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:14.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:14 vm02 bash[23351]: cluster 2026-03-09T17:57:13.072736+0000 mgr.y (mgr.14505) 1160 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:14 vm02 bash[23351]: cluster 2026-03-09T17:57:13.072736+0000 mgr.y (mgr.14505) 1160 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:14 vm02 bash[23351]: audit 2026-03-09T17:57:13.859971+0000 mon.c (mon.2) 916 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:14 vm02 bash[23351]: 
audit 2026-03-09T17:57:13.859971+0000 mon.c (mon.2) 916 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:16 vm00 bash[28333]: cluster 2026-03-09T17:57:15.073007+0000 mgr.y (mgr.14505) 1161 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:16 vm00 bash[28333]: cluster 2026-03-09T17:57:15.073007+0000 mgr.y (mgr.14505) 1161 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:16 vm00 bash[20770]: cluster 2026-03-09T17:57:15.073007+0000 mgr.y (mgr.14505) 1161 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:16 vm00 bash[20770]: cluster 2026-03-09T17:57:15.073007+0000 mgr.y (mgr.14505) 1161 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:57:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:57:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:57:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:16 vm02 bash[23351]: cluster 2026-03-09T17:57:15.073007+0000 mgr.y (mgr.14505) 1161 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:16 vm02 bash[23351]: cluster 2026-03-09T17:57:15.073007+0000 mgr.y (mgr.14505) 1161 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:18.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:18 vm02 bash[23351]: cluster 2026-03-09T17:57:17.073277+0000 mgr.y (mgr.14505) 1162 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:18 vm02 bash[23351]: cluster 2026-03-09T17:57:17.073277+0000 mgr.y (mgr.14505) 1162 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:18 vm00 bash[28333]: cluster 2026-03-09T17:57:17.073277+0000 mgr.y (mgr.14505) 1162 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:19.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:18 vm00 bash[28333]: cluster 2026-03-09T17:57:17.073277+0000 mgr.y (mgr.14505) 1162 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:18 vm00 bash[20770]: cluster 2026-03-09T17:57:17.073277+0000 mgr.y (mgr.14505) 1162 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-09T17:57:19.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:18 vm00 bash[20770]: cluster 2026-03-09T17:57:17.073277+0000 mgr.y (mgr.14505) 1162 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:20.385 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:57:20 vm02 bash[51223]: logger=cleanup t=2026-03-09T17:57:20.032408641Z level=info msg="Completed cleanup jobs" duration=1.206779ms 2026-03-09T17:57:20.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:57:20 vm02 bash[51223]: logger=plugins.update.checker t=2026-03-09T17:57:20.19361899Z level=info msg="Update check succeeded" duration=54.771256ms 2026-03-09T17:57:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:20 vm02 bash[23351]: cluster 2026-03-09T17:57:19.073878+0000 mgr.y (mgr.14505) 1163 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:20 vm02 bash[23351]: cluster 2026-03-09T17:57:19.073878+0000 mgr.y (mgr.14505) 1163 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:20 vm02 bash[23351]: audit 2026-03-09T17:57:20.264398+0000 mon.c (mon.2) 917 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:57:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:20 vm02 bash[23351]: audit 2026-03-09T17:57:20.264398+0000 mon.c (mon.2) 917 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:20 vm00 bash[28333]: cluster 2026-03-09T17:57:19.073878+0000 mgr.y (mgr.14505) 1163 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:20 vm00 bash[28333]: cluster 2026-03-09T17:57:19.073878+0000 mgr.y (mgr.14505) 1163 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:20 vm00 bash[28333]: audit 2026-03-09T17:57:20.264398+0000 mon.c (mon.2) 917 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:20 vm00 bash[28333]: audit 2026-03-09T17:57:20.264398+0000 mon.c (mon.2) 917 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:20 vm00 bash[20770]: cluster 2026-03-09T17:57:19.073878+0000 mgr.y (mgr.14505) 1163 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:20 vm00 bash[20770]: cluster 2026-03-09T17:57:19.073878+0000 mgr.y (mgr.14505) 1163 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 
455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:20 vm00 bash[20770]: audit 2026-03-09T17:57:20.264398+0000 mon.c (mon.2) 917 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:57:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:20 vm00 bash[20770]: audit 2026-03-09T17:57:20.264398+0000 mon.c (mon.2) 917 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.833273+0000 mon.a (mon.0) 3553 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.833273+0000 mon.a (mon.0) 3553 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.841001+0000 mon.a (mon.0) 3554 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.841001+0000 mon.a (mon.0) 3554 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.846389+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.846389+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.851610+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.851610+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.853624+0000 mon.c (mon.2) 918 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.853624+0000 mon.c (mon.2) 918 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.854256+0000 mon.c (mon.2) 919 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.854256+0000 mon.c (mon.2) 919 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.858141+0000 
mon.a (mon.0) 3557 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: audit 2026-03-09T17:57:20.858141+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: cluster 2026-03-09T17:57:21.074168+0000 mgr.y (mgr.14505) 1164 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:22.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:21 vm02 bash[23351]: cluster 2026-03-09T17:57:21.074168+0000 mgr.y (mgr.14505) 1164 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.833273+0000 mon.a (mon.0) 3553 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.833273+0000 mon.a (mon.0) 3553 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.841001+0000 mon.a (mon.0) 3554 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.841001+0000 mon.a (mon.0) 3554 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.846389+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.846389+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.851610+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.851610+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.853624+0000 mon.c (mon.2) 918 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.853624+0000 mon.c (mon.2) 918 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.854256+0000 mon.c (mon.2) 919 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.854256+0000 mon.c (mon.2) 919 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.858141+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: audit 2026-03-09T17:57:20.858141+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: cluster 2026-03-09T17:57:21.074168+0000 mgr.y (mgr.14505) 1164 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:21 vm00 bash[28333]: cluster 2026-03-09T17:57:21.074168+0000 mgr.y (mgr.14505) 1164 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:22.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.833273+0000 mon.a (mon.0) 3553 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.833273+0000 mon.a (mon.0) 3553 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.841001+0000 mon.a (mon.0) 3554 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.841001+0000 mon.a (mon.0) 3554 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.846389+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.846389+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.851610+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.851610+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.853624+0000 mon.c (mon.2) 918 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.853624+0000 mon.c (mon.2) 918 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.854256+0000 mon.c (mon.2) 919 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:57:22.289 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.854256+0000 mon.c (mon.2) 919 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.858141+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: audit 2026-03-09T17:57:20.858141+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: cluster 2026-03-09T17:57:21.074168+0000 mgr.y (mgr.14505) 1164 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:22.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:21 vm00 bash[20770]: cluster 2026-03-09T17:57:21.074168+0000 mgr.y (mgr.14505) 1164 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:23.385 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:57:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:24 vm00 bash[28333]: audit 2026-03-09T17:57:22.929288+0000 mgr.y (mgr.14505) 1165 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:24 vm00 bash[28333]: audit 2026-03-09T17:57:22.929288+0000 mgr.y (mgr.14505) 1165 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:24 vm00 bash[28333]: cluster 2026-03-09T17:57:23.074830+0000 mgr.y (mgr.14505) 1166 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:24 vm00 bash[28333]: cluster 2026-03-09T17:57:23.074830+0000 mgr.y (mgr.14505) 1166 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:24 vm00 bash[20770]: audit 2026-03-09T17:57:22.929288+0000 mgr.y (mgr.14505) 1165 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:24 vm00 bash[20770]: audit 2026-03-09T17:57:22.929288+0000 mgr.y (mgr.14505) 1165 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:24 vm00 bash[20770]: cluster 2026-03-09T17:57:23.074830+0000 mgr.y (mgr.14505) 1166 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:24.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:24 vm00 bash[20770]: cluster 2026-03-09T17:57:23.074830+0000 
mgr.y (mgr.14505) 1166 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:24 vm02 bash[23351]: audit 2026-03-09T17:57:22.929288+0000 mgr.y (mgr.14505) 1165 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:24 vm02 bash[23351]: audit 2026-03-09T17:57:22.929288+0000 mgr.y (mgr.14505) 1165 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:24 vm02 bash[23351]: cluster 2026-03-09T17:57:23.074830+0000 mgr.y (mgr.14505) 1166 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:24.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:24 vm02 bash[23351]: cluster 2026-03-09T17:57:23.074830+0000 mgr.y (mgr.14505) 1166 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:26 vm00 bash[28333]: cluster 2026-03-09T17:57:25.075103+0000 mgr.y (mgr.14505) 1167 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:26 vm00 bash[28333]: cluster 2026-03-09T17:57:25.075103+0000 mgr.y (mgr.14505) 1167 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:26 vm00 bash[20770]: cluster 2026-03-09T17:57:25.075103+0000 mgr.y (mgr.14505) 1167 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:26 vm00 bash[20770]: cluster 2026-03-09T17:57:25.075103+0000 mgr.y (mgr.14505) 1167 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:57:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:57:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:57:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:26 vm02 bash[23351]: cluster 2026-03-09T17:57:25.075103+0000 mgr.y (mgr.14505) 1167 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:26.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:26 vm02 bash[23351]: cluster 2026-03-09T17:57:25.075103+0000 mgr.y (mgr.14505) 1167 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:28 vm00 bash[28333]: cluster 2026-03-09T17:57:27.075452+0000 mgr.y (mgr.14505) 1168 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:28.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:28 vm00 bash[28333]: cluster 2026-03-09T17:57:27.075452+0000 mgr.y (mgr.14505) 1168 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:28 vm00 bash[20770]: cluster 2026-03-09T17:57:27.075452+0000 mgr.y (mgr.14505) 1168 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:28 vm00 bash[20770]: cluster 2026-03-09T17:57:27.075452+0000 mgr.y (mgr.14505) 1168 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:28 vm02 bash[23351]: cluster 2026-03-09T17:57:27.075452+0000 mgr.y (mgr.14505) 1168 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:28 vm02 bash[23351]: cluster 2026-03-09T17:57:27.075452+0000 mgr.y (mgr.14505) 1168 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:29 vm00 bash[28333]: audit 2026-03-09T17:57:28.865729+0000 mon.c (mon.2) 920 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:29 vm00 bash[28333]: audit 2026-03-09T17:57:28.865729+0000 mon.c (mon.2) 920 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:29 vm00 bash[20770]: audit 2026-03-09T17:57:28.865729+0000 mon.c (mon.2) 920 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:29.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:29 vm00 bash[20770]: audit 2026-03-09T17:57:28.865729+0000 mon.c (mon.2) 920 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:29 vm02 bash[23351]: audit 2026-03-09T17:57:28.865729+0000 mon.c (mon.2) 920 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:29 vm02 bash[23351]: audit 2026-03-09T17:57:28.865729+0000 mon.c (mon.2) 920 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:30 vm00 bash[28333]: cluster 2026-03-09T17:57:29.076193+0000 mgr.y (mgr.14505) 1169 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:30 vm00 
bash[28333]: cluster 2026-03-09T17:57:29.076193+0000 mgr.y (mgr.14505) 1169 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:30 vm00 bash[20770]: cluster 2026-03-09T17:57:29.076193+0000 mgr.y (mgr.14505) 1169 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:30 vm00 bash[20770]: cluster 2026-03-09T17:57:29.076193+0000 mgr.y (mgr.14505) 1169 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:30 vm02 bash[23351]: cluster 2026-03-09T17:57:29.076193+0000 mgr.y (mgr.14505) 1169 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:30.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:30 vm02 bash[23351]: cluster 2026-03-09T17:57:29.076193+0000 mgr.y (mgr.14505) 1169 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:32 vm00 bash[28333]: cluster 2026-03-09T17:57:31.076516+0000 mgr.y (mgr.14505) 1170 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:32 vm00 bash[28333]: cluster 2026-03-09T17:57:31.076516+0000 mgr.y (mgr.14505) 1170 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:32 vm00 bash[20770]: cluster 2026-03-09T17:57:31.076516+0000 mgr.y (mgr.14505) 1170 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:32 vm00 bash[20770]: cluster 2026-03-09T17:57:31.076516+0000 mgr.y (mgr.14505) 1170 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:32 vm02 bash[23351]: cluster 2026-03-09T17:57:31.076516+0000 mgr.y (mgr.14505) 1170 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:32 vm02 bash[23351]: cluster 2026-03-09T17:57:31.076516+0000 mgr.y (mgr.14505) 1170 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:33.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:57:32 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:57:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:34 vm00 bash[28333]: audit 2026-03-09T17:57:32.935846+0000 mgr.y (mgr.14505) 1171 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:34.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:34 vm00 bash[28333]: audit 2026-03-09T17:57:32.935846+0000 mgr.y (mgr.14505) 1171 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:34 vm00 bash[28333]: cluster 2026-03-09T17:57:33.077288+0000 mgr.y (mgr.14505) 1172 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:34 vm00 bash[28333]: cluster 2026-03-09T17:57:33.077288+0000 mgr.y (mgr.14505) 1172 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:34.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:34 vm00 bash[20770]: audit 2026-03-09T17:57:32.935846+0000 mgr.y (mgr.14505) 1171 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:34.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:34 vm00 bash[20770]: audit 2026-03-09T17:57:32.935846+0000 mgr.y (mgr.14505) 1171 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:34.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:34 vm00 bash[20770]: cluster 2026-03-09T17:57:33.077288+0000 mgr.y (mgr.14505) 1172 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:34.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:34 vm00 bash[20770]: cluster 2026-03-09T17:57:33.077288+0000 mgr.y (mgr.14505) 1172 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:34.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:34 vm02 bash[23351]: audit 2026-03-09T17:57:32.935846+0000 mgr.y (mgr.14505) 1171 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:34 vm02 bash[23351]: audit 2026-03-09T17:57:32.935846+0000 mgr.y (mgr.14505) 1171 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:34 vm02 bash[23351]: cluster 2026-03-09T17:57:33.077288+0000 mgr.y (mgr.14505) 1172 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:34 vm02 bash[23351]: cluster 2026-03-09T17:57:33.077288+0000 mgr.y (mgr.14505) 1172 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:36 vm00 bash[28333]: cluster 2026-03-09T17:57:35.077662+0000 mgr.y (mgr.14505) 1173 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:36 vm00 bash[28333]: cluster 
2026-03-09T17:57:35.077662+0000 mgr.y (mgr.14505) 1173 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:57:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:57:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:57:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:36 vm00 bash[20770]: cluster 2026-03-09T17:57:35.077662+0000 mgr.y (mgr.14505) 1173 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:36 vm00 bash[20770]: cluster 2026-03-09T17:57:35.077662+0000 mgr.y (mgr.14505) 1173 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:36 vm02 bash[23351]: cluster 2026-03-09T17:57:35.077662+0000 mgr.y (mgr.14505) 1173 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:36 vm02 bash[23351]: cluster 2026-03-09T17:57:35.077662+0000 mgr.y (mgr.14505) 1173 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:38 vm00 bash[28333]: cluster 2026-03-09T17:57:37.078013+0000 mgr.y (mgr.14505) 1174 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:38 vm00 bash[28333]: cluster 2026-03-09T17:57:37.078013+0000 mgr.y (mgr.14505) 1174 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:38 vm00 bash[20770]: cluster 2026-03-09T17:57:37.078013+0000 mgr.y (mgr.14505) 1174 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:38 vm00 bash[20770]: cluster 2026-03-09T17:57:37.078013+0000 mgr.y (mgr.14505) 1174 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:38 vm02 bash[23351]: cluster 2026-03-09T17:57:37.078013+0000 mgr.y (mgr.14505) 1174 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:38 vm02 bash[23351]: cluster 2026-03-09T17:57:37.078013+0000 mgr.y (mgr.14505) 1174 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:40 vm00 bash[28333]: cluster 2026-03-09T17:57:39.078700+0000 mgr.y (mgr.14505) 1175 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T17:57:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:40 vm00 bash[28333]: cluster 2026-03-09T17:57:39.078700+0000 mgr.y (mgr.14505) 1175 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:40 vm00 bash[20770]: cluster 2026-03-09T17:57:39.078700+0000 mgr.y (mgr.14505) 1175 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:40 vm00 bash[20770]: cluster 2026-03-09T17:57:39.078700+0000 mgr.y (mgr.14505) 1175 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:40 vm02 bash[23351]: cluster 2026-03-09T17:57:39.078700+0000 mgr.y (mgr.14505) 1175 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:40.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:40 vm02 bash[23351]: cluster 2026-03-09T17:57:39.078700+0000 mgr.y (mgr.14505) 1175 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:42 vm00 bash[28333]: cluster 2026-03-09T17:57:41.078985+0000 mgr.y (mgr.14505) 1176 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:42 vm00 bash[28333]: cluster 2026-03-09T17:57:41.078985+0000 mgr.y (mgr.14505) 1176 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:42 vm00 bash[20770]: cluster 2026-03-09T17:57:41.078985+0000 mgr.y (mgr.14505) 1176 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:42 vm00 bash[20770]: cluster 2026-03-09T17:57:41.078985+0000 mgr.y (mgr.14505) 1176 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:42 vm02 bash[23351]: cluster 2026-03-09T17:57:41.078985+0000 mgr.y (mgr.14505) 1176 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:42 vm02 bash[23351]: cluster 2026-03-09T17:57:41.078985+0000 mgr.y (mgr.14505) 1176 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:43.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:57:42 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:44 vm00 bash[28333]: audit 2026-03-09T17:57:42.945364+0000 mgr.y (mgr.14505) 1177 : audit [DBG] from='client.14484 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:44 vm00 bash[28333]: audit 2026-03-09T17:57:42.945364+0000 mgr.y (mgr.14505) 1177 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:44 vm00 bash[28333]: cluster 2026-03-09T17:57:43.079569+0000 mgr.y (mgr.14505) 1178 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:44 vm00 bash[28333]: cluster 2026-03-09T17:57:43.079569+0000 mgr.y (mgr.14505) 1178 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:44 vm00 bash[28333]: audit 2026-03-09T17:57:43.872459+0000 mon.c (mon.2) 921 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:44 vm00 bash[28333]: audit 2026-03-09T17:57:43.872459+0000 mon.c (mon.2) 921 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:44 vm00 bash[20770]: audit 2026-03-09T17:57:42.945364+0000 mgr.y (mgr.14505) 1177 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:44 vm00 bash[20770]: audit 2026-03-09T17:57:42.945364+0000 mgr.y (mgr.14505) 1177 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:44 vm00 bash[20770]: cluster 2026-03-09T17:57:43.079569+0000 mgr.y (mgr.14505) 1178 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:44 vm00 bash[20770]: cluster 2026-03-09T17:57:43.079569+0000 mgr.y (mgr.14505) 1178 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:44 vm00 bash[20770]: audit 2026-03-09T17:57:43.872459+0000 mon.c (mon.2) 921 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:44 vm00 bash[20770]: audit 2026-03-09T17:57:43.872459+0000 mon.c (mon.2) 921 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:44 vm02 bash[23351]: audit 2026-03-09T17:57:42.945364+0000 mgr.y (mgr.14505) 1177 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T17:57:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:44 vm02 bash[23351]: audit 2026-03-09T17:57:42.945364+0000 mgr.y (mgr.14505) 1177 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:44 vm02 bash[23351]: cluster 2026-03-09T17:57:43.079569+0000 mgr.y (mgr.14505) 1178 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:44 vm02 bash[23351]: cluster 2026-03-09T17:57:43.079569+0000 mgr.y (mgr.14505) 1178 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:44 vm02 bash[23351]: audit 2026-03-09T17:57:43.872459+0000 mon.c (mon.2) 921 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:44 vm02 bash[23351]: audit 2026-03-09T17:57:43.872459+0000 mon.c (mon.2) 921 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:46 vm00 bash[28333]: cluster 2026-03-09T17:57:45.079912+0000 mgr.y (mgr.14505) 1179 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:46 vm00 bash[28333]: cluster 2026-03-09T17:57:45.079912+0000 mgr.y (mgr.14505) 1179 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:46 vm00 bash[20770]: cluster 2026-03-09T17:57:45.079912+0000 mgr.y (mgr.14505) 1179 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:46 vm00 bash[20770]: cluster 2026-03-09T17:57:45.079912+0000 mgr.y (mgr.14505) 1179 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:57:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:57:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:57:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:46 vm02 bash[23351]: cluster 2026-03-09T17:57:45.079912+0000 mgr.y (mgr.14505) 1179 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:46.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:46 vm02 bash[23351]: cluster 2026-03-09T17:57:45.079912+0000 mgr.y (mgr.14505) 1179 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:48 vm00 bash[28333]: cluster 2026-03-09T17:57:47.080192+0000 mgr.y 
(mgr.14505) 1180 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:48 vm00 bash[28333]: cluster 2026-03-09T17:57:47.080192+0000 mgr.y (mgr.14505) 1180 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:48 vm00 bash[20770]: cluster 2026-03-09T17:57:47.080192+0000 mgr.y (mgr.14505) 1180 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:48 vm00 bash[20770]: cluster 2026-03-09T17:57:47.080192+0000 mgr.y (mgr.14505) 1180 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:48 vm02 bash[23351]: cluster 2026-03-09T17:57:47.080192+0000 mgr.y (mgr.14505) 1180 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:48.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:48 vm02 bash[23351]: cluster 2026-03-09T17:57:47.080192+0000 mgr.y (mgr.14505) 1180 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:50 vm00 bash[28333]: cluster 2026-03-09T17:57:49.080963+0000 mgr.y (mgr.14505) 1181 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:50 vm00 bash[28333]: cluster 2026-03-09T17:57:49.080963+0000 mgr.y (mgr.14505) 1181 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:50 vm00 bash[20770]: cluster 2026-03-09T17:57:49.080963+0000 mgr.y (mgr.14505) 1181 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:50 vm00 bash[20770]: cluster 2026-03-09T17:57:49.080963+0000 mgr.y (mgr.14505) 1181 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:50.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:50 vm02 bash[23351]: cluster 2026-03-09T17:57:49.080963+0000 mgr.y (mgr.14505) 1181 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:50 vm02 bash[23351]: cluster 2026-03-09T17:57:49.080963+0000 mgr.y (mgr.14505) 1181 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:52 vm00 bash[28333]: cluster 2026-03-09T17:57:51.081306+0000 mgr.y (mgr.14505) 1182 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 
992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:52 vm00 bash[28333]: cluster 2026-03-09T17:57:51.081306+0000 mgr.y (mgr.14505) 1182 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:52 vm00 bash[20770]: cluster 2026-03-09T17:57:51.081306+0000 mgr.y (mgr.14505) 1182 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:52 vm00 bash[20770]: cluster 2026-03-09T17:57:51.081306+0000 mgr.y (mgr.14505) 1182 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:52 vm02 bash[23351]: cluster 2026-03-09T17:57:51.081306+0000 mgr.y (mgr.14505) 1182 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:52 vm02 bash[23351]: cluster 2026-03-09T17:57:51.081306+0000 mgr.y (mgr.14505) 1182 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:53.385 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:57:52 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:54 vm00 bash[28333]: audit 2026-03-09T17:57:52.955924+0000 mgr.y (mgr.14505) 1183 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:54 vm00 bash[28333]: audit 2026-03-09T17:57:52.955924+0000 mgr.y (mgr.14505) 1183 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:54 vm00 bash[28333]: cluster 2026-03-09T17:57:53.081893+0000 mgr.y (mgr.14505) 1184 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:54 vm00 bash[28333]: cluster 2026-03-09T17:57:53.081893+0000 mgr.y (mgr.14505) 1184 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:54 vm00 bash[20770]: audit 2026-03-09T17:57:52.955924+0000 mgr.y (mgr.14505) 1183 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:54 vm00 bash[20770]: audit 2026-03-09T17:57:52.955924+0000 mgr.y (mgr.14505) 1183 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:54 vm00 bash[20770]: cluster 2026-03-09T17:57:53.081893+0000 mgr.y (mgr.14505) 1184 : cluster [DBG] 
pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:54 vm00 bash[20770]: cluster 2026-03-09T17:57:53.081893+0000 mgr.y (mgr.14505) 1184 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:54 vm02 bash[23351]: audit 2026-03-09T17:57:52.955924+0000 mgr.y (mgr.14505) 1183 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:54 vm02 bash[23351]: audit 2026-03-09T17:57:52.955924+0000 mgr.y (mgr.14505) 1183 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:57:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:54 vm02 bash[23351]: cluster 2026-03-09T17:57:53.081893+0000 mgr.y (mgr.14505) 1184 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:54.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:54 vm02 bash[23351]: cluster 2026-03-09T17:57:53.081893+0000 mgr.y (mgr.14505) 1184 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:57:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:56 vm00 bash[28333]: cluster 2026-03-09T17:57:55.082201+0000 mgr.y (mgr.14505) 1185 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:56 vm00 bash[28333]: cluster 2026-03-09T17:57:55.082201+0000 mgr.y (mgr.14505) 1185 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:56 vm00 bash[20770]: cluster 2026-03-09T17:57:55.082201+0000 mgr.y (mgr.14505) 1185 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:56 vm00 bash[20770]: cluster 2026-03-09T17:57:55.082201+0000 mgr.y (mgr.14505) 1185 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:57:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:57:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:57:56.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:56 vm02 bash[23351]: cluster 2026-03-09T17:57:55.082201+0000 mgr.y (mgr.14505) 1185 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:56 vm02 bash[23351]: cluster 2026-03-09T17:57:55.082201+0000 mgr.y (mgr.14505) 1185 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:58.538 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:58 vm00 bash[28333]: cluster 2026-03-09T17:57:57.082468+0000 mgr.y (mgr.14505) 1186 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:58 vm00 bash[28333]: cluster 2026-03-09T17:57:57.082468+0000 mgr.y (mgr.14505) 1186 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:58 vm00 bash[20770]: cluster 2026-03-09T17:57:57.082468+0000 mgr.y (mgr.14505) 1186 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:58 vm00 bash[20770]: cluster 2026-03-09T17:57:57.082468+0000 mgr.y (mgr.14505) 1186 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:58 vm02 bash[23351]: cluster 2026-03-09T17:57:57.082468+0000 mgr.y (mgr.14505) 1186 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:58 vm02 bash[23351]: cluster 2026-03-09T17:57:57.082468+0000 mgr.y (mgr.14505) 1186 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:57:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:59 vm00 bash[28333]: audit 2026-03-09T17:57:58.878861+0000 mon.c (mon.2) 922 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:57:59 vm00 bash[28333]: audit 2026-03-09T17:57:58.878861+0000 mon.c (mon.2) 922 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:59 vm00 bash[20770]: audit 2026-03-09T17:57:58.878861+0000 mon.c (mon.2) 922 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:57:59 vm00 bash[20770]: audit 2026-03-09T17:57:58.878861+0000 mon.c (mon.2) 922 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:59 vm02 bash[23351]: audit 2026-03-09T17:57:58.878861+0000 mon.c (mon.2) 922 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:57:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:57:59 vm02 bash[23351]: audit 2026-03-09T17:57:58.878861+0000 mon.c (mon.2) 922 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:00 vm00 
bash[28333]: cluster 2026-03-09T17:57:59.083172+0000 mgr.y (mgr.14505) 1187 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:00 vm00 bash[28333]: cluster 2026-03-09T17:57:59.083172+0000 mgr.y (mgr.14505) 1187 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:00 vm00 bash[20770]: cluster 2026-03-09T17:57:59.083172+0000 mgr.y (mgr.14505) 1187 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:00.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:00 vm00 bash[20770]: cluster 2026-03-09T17:57:59.083172+0000 mgr.y (mgr.14505) 1187 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:00 vm02 bash[23351]: cluster 2026-03-09T17:57:59.083172+0000 mgr.y (mgr.14505) 1187 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:00 vm02 bash[23351]: cluster 2026-03-09T17:57:59.083172+0000 mgr.y (mgr.14505) 1187 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:02 vm00 bash[28333]: cluster 2026-03-09T17:58:01.083445+0000 mgr.y (mgr.14505) 1188 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:02 vm00 bash[28333]: cluster 2026-03-09T17:58:01.083445+0000 mgr.y (mgr.14505) 1188 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:02 vm00 bash[20770]: cluster 2026-03-09T17:58:01.083445+0000 mgr.y (mgr.14505) 1188 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:02 vm00 bash[20770]: cluster 2026-03-09T17:58:01.083445+0000 mgr.y (mgr.14505) 1188 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:02.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:02 vm02 bash[23351]: cluster 2026-03-09T17:58:01.083445+0000 mgr.y (mgr.14505) 1188 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:02 vm02 bash[23351]: cluster 2026-03-09T17:58:01.083445+0000 mgr.y (mgr.14505) 1188 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:03.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:58:02 vm02 bash[48996]: debug there is no tcmu-runner data available 
2026-03-09T17:58:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:04 vm02 bash[23351]: audit 2026-03-09T17:58:02.966592+0000 mgr.y (mgr.14505) 1189 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:04 vm02 bash[23351]: audit 2026-03-09T17:58:02.966592+0000 mgr.y (mgr.14505) 1189 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:04 vm02 bash[23351]: cluster 2026-03-09T17:58:03.084127+0000 mgr.y (mgr.14505) 1190 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:04.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:04 vm02 bash[23351]: cluster 2026-03-09T17:58:03.084127+0000 mgr.y (mgr.14505) 1190 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:04.386 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 17:58:04 vm02 bash[51223]: logger=infra.usagestats t=2026-03-09T17:58:04.049851326Z level=info msg="Usage stats are ready to report" 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:04 vm00 bash[28333]: audit 2026-03-09T17:58:02.966592+0000 mgr.y (mgr.14505) 1189 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:04 vm00 bash[28333]: audit 2026-03-09T17:58:02.966592+0000 mgr.y (mgr.14505) 1189 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:04 vm00 bash[28333]: cluster 2026-03-09T17:58:03.084127+0000 mgr.y (mgr.14505) 1190 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:04 vm00 bash[28333]: cluster 2026-03-09T17:58:03.084127+0000 mgr.y (mgr.14505) 1190 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:04 vm00 bash[20770]: audit 2026-03-09T17:58:02.966592+0000 mgr.y (mgr.14505) 1189 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:04 vm00 bash[20770]: audit 2026-03-09T17:58:02.966592+0000 mgr.y (mgr.14505) 1189 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:04 vm00 bash[20770]: cluster 2026-03-09T17:58:03.084127+0000 mgr.y (mgr.14505) 1190 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:04.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:04 vm00 bash[20770]: cluster 2026-03-09T17:58:03.084127+0000 mgr.y (mgr.14505) 1190 : cluster [DBG] 
pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:06 vm02 bash[23351]: cluster 2026-03-09T17:58:05.084440+0000 mgr.y (mgr.14505) 1191 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:06 vm02 bash[23351]: cluster 2026-03-09T17:58:05.084440+0000 mgr.y (mgr.14505) 1191 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:06 vm00 bash[28333]: cluster 2026-03-09T17:58:05.084440+0000 mgr.y (mgr.14505) 1191 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:06 vm00 bash[28333]: cluster 2026-03-09T17:58:05.084440+0000 mgr.y (mgr.14505) 1191 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:06 vm00 bash[20770]: cluster 2026-03-09T17:58:05.084440+0000 mgr.y (mgr.14505) 1191 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:06 vm00 bash[20770]: cluster 2026-03-09T17:58:05.084440+0000 mgr.y (mgr.14505) 1191 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:58:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:58:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:58:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:08 vm02 bash[23351]: cluster 2026-03-09T17:58:07.084710+0000 mgr.y (mgr.14505) 1192 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:08 vm02 bash[23351]: cluster 2026-03-09T17:58:07.084710+0000 mgr.y (mgr.14505) 1192 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:08 vm00 bash[28333]: cluster 2026-03-09T17:58:07.084710+0000 mgr.y (mgr.14505) 1192 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:08 vm00 bash[28333]: cluster 2026-03-09T17:58:07.084710+0000 mgr.y (mgr.14505) 1192 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:08 vm00 bash[20770]: cluster 2026-03-09T17:58:07.084710+0000 mgr.y (mgr.14505) 1192 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:58:08 vm00 bash[20770]: cluster 2026-03-09T17:58:07.084710+0000 mgr.y (mgr.14505) 1192 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:10 vm02 bash[23351]: cluster 2026-03-09T17:58:09.085525+0000 mgr.y (mgr.14505) 1193 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:10.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:10 vm02 bash[23351]: cluster 2026-03-09T17:58:09.085525+0000 mgr.y (mgr.14505) 1193 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:10 vm00 bash[28333]: cluster 2026-03-09T17:58:09.085525+0000 mgr.y (mgr.14505) 1193 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:10 vm00 bash[28333]: cluster 2026-03-09T17:58:09.085525+0000 mgr.y (mgr.14505) 1193 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:10 vm00 bash[20770]: cluster 2026-03-09T17:58:09.085525+0000 mgr.y (mgr.14505) 1193 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:10 vm00 bash[20770]: cluster 2026-03-09T17:58:09.085525+0000 mgr.y (mgr.14505) 1193 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:12 vm02 bash[23351]: cluster 2026-03-09T17:58:11.085908+0000 mgr.y (mgr.14505) 1194 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:12.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:12 vm02 bash[23351]: cluster 2026-03-09T17:58:11.085908+0000 mgr.y (mgr.14505) 1194 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:12 vm00 bash[28333]: cluster 2026-03-09T17:58:11.085908+0000 mgr.y (mgr.14505) 1194 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:12 vm00 bash[28333]: cluster 2026-03-09T17:58:11.085908+0000 mgr.y (mgr.14505) 1194 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:12 vm00 bash[20770]: cluster 2026-03-09T17:58:11.085908+0000 mgr.y (mgr.14505) 1194 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:12 vm00 bash[20770]: cluster 2026-03-09T17:58:11.085908+0000 mgr.y (mgr.14505) 
1194 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:13.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:58:12 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:58:14.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:14 vm02 bash[23351]: audit 2026-03-09T17:58:12.977326+0000 mgr.y (mgr.14505) 1195 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:14 vm02 bash[23351]: audit 2026-03-09T17:58:12.977326+0000 mgr.y (mgr.14505) 1195 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:14 vm02 bash[23351]: cluster 2026-03-09T17:58:13.086719+0000 mgr.y (mgr.14505) 1196 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:14 vm02 bash[23351]: cluster 2026-03-09T17:58:13.086719+0000 mgr.y (mgr.14505) 1196 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:14 vm02 bash[23351]: audit 2026-03-09T17:58:13.886410+0000 mon.c (mon.2) 923 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:14.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:14 vm02 bash[23351]: audit 2026-03-09T17:58:13.886410+0000 mon.c (mon.2) 923 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:14 vm00 bash[28333]: audit 2026-03-09T17:58:12.977326+0000 mgr.y (mgr.14505) 1195 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:14 vm00 bash[28333]: audit 2026-03-09T17:58:12.977326+0000 mgr.y (mgr.14505) 1195 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:14 vm00 bash[28333]: cluster 2026-03-09T17:58:13.086719+0000 mgr.y (mgr.14505) 1196 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:14 vm00 bash[28333]: cluster 2026-03-09T17:58:13.086719+0000 mgr.y (mgr.14505) 1196 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:14 vm00 bash[28333]: audit 2026-03-09T17:58:13.886410+0000 mon.c (mon.2) 923 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:14 vm00 
bash[28333]: audit 2026-03-09T17:58:13.886410+0000 mon.c (mon.2) 923 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:14 vm00 bash[20770]: audit 2026-03-09T17:58:12.977326+0000 mgr.y (mgr.14505) 1195 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:14 vm00 bash[20770]: audit 2026-03-09T17:58:12.977326+0000 mgr.y (mgr.14505) 1195 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:14 vm00 bash[20770]: cluster 2026-03-09T17:58:13.086719+0000 mgr.y (mgr.14505) 1196 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:14 vm00 bash[20770]: cluster 2026-03-09T17:58:13.086719+0000 mgr.y (mgr.14505) 1196 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:14 vm00 bash[20770]: audit 2026-03-09T17:58:13.886410+0000 mon.c (mon.2) 923 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:14 vm00 bash[20770]: audit 2026-03-09T17:58:13.886410+0000 mon.c (mon.2) 923 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:16 vm02 bash[23351]: cluster 2026-03-09T17:58:15.087055+0000 mgr.y (mgr.14505) 1197 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:16 vm02 bash[23351]: cluster 2026-03-09T17:58:15.087055+0000 mgr.y (mgr.14505) 1197 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:16 vm00 bash[28333]: cluster 2026-03-09T17:58:15.087055+0000 mgr.y (mgr.14505) 1197 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:16 vm00 bash[28333]: cluster 2026-03-09T17:58:15.087055+0000 mgr.y (mgr.14505) 1197 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:16 vm00 bash[20770]: cluster 2026-03-09T17:58:15.087055+0000 mgr.y (mgr.14505) 1197 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:16 vm00 bash[20770]: cluster 2026-03-09T17:58:15.087055+0000 mgr.y (mgr.14505) 1197 : 
cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:58:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:58:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:58:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:18 vm02 bash[23351]: cluster 2026-03-09T17:58:17.087430+0000 mgr.y (mgr.14505) 1198 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:18 vm02 bash[23351]: cluster 2026-03-09T17:58:17.087430+0000 mgr.y (mgr.14505) 1198 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:18 vm00 bash[28333]: cluster 2026-03-09T17:58:17.087430+0000 mgr.y (mgr.14505) 1198 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:18 vm00 bash[28333]: cluster 2026-03-09T17:58:17.087430+0000 mgr.y (mgr.14505) 1198 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:18 vm00 bash[20770]: cluster 2026-03-09T17:58:17.087430+0000 mgr.y (mgr.14505) 1198 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:18 vm00 bash[20770]: cluster 2026-03-09T17:58:17.087430+0000 mgr.y (mgr.14505) 1198 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:20 vm02 bash[23351]: cluster 2026-03-09T17:58:19.088137+0000 mgr.y (mgr.14505) 1199 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:20 vm02 bash[23351]: cluster 2026-03-09T17:58:19.088137+0000 mgr.y (mgr.14505) 1199 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:20 vm00 bash[28333]: cluster 2026-03-09T17:58:19.088137+0000 mgr.y (mgr.14505) 1199 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:20 vm00 bash[28333]: cluster 2026-03-09T17:58:19.088137+0000 mgr.y (mgr.14505) 1199 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:20 vm00 bash[20770]: cluster 2026-03-09T17:58:19.088137+0000 mgr.y (mgr.14505) 1199 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:20.788 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:20 vm00 bash[20770]: cluster 2026-03-09T17:58:19.088137+0000 mgr.y (mgr.14505) 1199 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:20.904427+0000 mon.c (mon.2) 924 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:20.904427+0000 mon.c (mon.2) 924 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.252790+0000 mon.c (mon.2) 925 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.252790+0000 mon.c (mon.2) 925 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.253216+0000 mon.c (mon.2) 926 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.253216+0000 mon.c (mon.2) 926 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.253521+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.253521+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.253590+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.253590+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.256110+0000 mon.c (mon.2) 927 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.256110+0000 mon.c (mon.2) 927 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.257162+0000 mon.c (mon.2) 928 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.257162+0000 mon.c (mon.2) 928 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.263593+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:21 vm00 bash[28333]: audit 2026-03-09T17:58:21.263593+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:20.904427+0000 mon.c (mon.2) 924 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:20.904427+0000 mon.c (mon.2) 924 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.252790+0000 mon.c (mon.2) 925 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.252790+0000 mon.c (mon.2) 925 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.253216+0000 mon.c (mon.2) 926 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.253216+0000 mon.c (mon.2) 926 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.253521+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 
bash[20770]: audit 2026-03-09T17:58:21.253521+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.253590+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.253590+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.256110+0000 mon.c (mon.2) 927 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.256110+0000 mon.c (mon.2) 927 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.257162+0000 mon.c (mon.2) 928 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.257162+0000 mon.c (mon.2) 928 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.263593+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:58:21.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:21 vm00 bash[20770]: audit 2026-03-09T17:58:21.263593+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:58:21.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:20.904427+0000 mon.c (mon.2) 924 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:20.904427+0000 mon.c (mon.2) 924 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.252790+0000 mon.c (mon.2) 925 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.252790+0000 mon.c (mon.2) 925 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.253216+0000 mon.c (mon.2) 926 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.253216+0000 mon.c (mon.2) 926 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.253521+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.253521+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.253590+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.253590+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.14505 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.256110+0000 mon.c (mon.2) 927 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.256110+0000 mon.c (mon.2) 927 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.257162+0000 mon.c (mon.2) 928 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.257162+0000 mon.c (mon.2) 928 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.263593+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:58:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:21 vm02 bash[23351]: audit 2026-03-09T17:58:21.263593+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:58:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:22 vm00 bash[28333]: cluster 2026-03-09T17:58:21.088523+0000 mgr.y 
(mgr.14505) 1200 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:22 vm00 bash[28333]: cluster 2026-03-09T17:58:21.088523+0000 mgr.y (mgr.14505) 1200 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:22 vm00 bash[20770]: cluster 2026-03-09T17:58:21.088523+0000 mgr.y (mgr.14505) 1200 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:22 vm00 bash[20770]: cluster 2026-03-09T17:58:21.088523+0000 mgr.y (mgr.14505) 1200 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:22.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:22 vm02 bash[23351]: cluster 2026-03-09T17:58:21.088523+0000 mgr.y (mgr.14505) 1200 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:22 vm02 bash[23351]: cluster 2026-03-09T17:58:21.088523+0000 mgr.y (mgr.14505) 1200 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:23.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:58:22 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:24 vm00 bash[28333]: audit 2026-03-09T17:58:22.987890+0000 mgr.y (mgr.14505) 1201 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:24 vm00 bash[28333]: audit 2026-03-09T17:58:22.987890+0000 mgr.y (mgr.14505) 1201 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:24 vm00 bash[28333]: cluster 2026-03-09T17:58:23.089209+0000 mgr.y (mgr.14505) 1202 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:24 vm00 bash[28333]: cluster 2026-03-09T17:58:23.089209+0000 mgr.y (mgr.14505) 1202 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:24 vm00 bash[20770]: audit 2026-03-09T17:58:22.987890+0000 mgr.y (mgr.14505) 1201 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:24 vm00 bash[20770]: audit 2026-03-09T17:58:22.987890+0000 mgr.y (mgr.14505) 1201 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:24 vm00 
bash[20770]: cluster 2026-03-09T17:58:23.089209+0000 mgr.y (mgr.14505) 1202 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:24 vm00 bash[20770]: cluster 2026-03-09T17:58:23.089209+0000 mgr.y (mgr.14505) 1202 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:24 vm02 bash[23351]: audit 2026-03-09T17:58:22.987890+0000 mgr.y (mgr.14505) 1201 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:24 vm02 bash[23351]: audit 2026-03-09T17:58:22.987890+0000 mgr.y (mgr.14505) 1201 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:24 vm02 bash[23351]: cluster 2026-03-09T17:58:23.089209+0000 mgr.y (mgr.14505) 1202 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:24 vm02 bash[23351]: cluster 2026-03-09T17:58:23.089209+0000 mgr.y (mgr.14505) 1202 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:26 vm00 bash[20770]: cluster 2026-03-09T17:58:25.089634+0000 mgr.y (mgr.14505) 1203 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:26 vm00 bash[20770]: cluster 2026-03-09T17:58:25.089634+0000 mgr.y (mgr.14505) 1203 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:26 vm00 bash[28333]: cluster 2026-03-09T17:58:25.089634+0000 mgr.y (mgr.14505) 1203 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:26 vm00 bash[28333]: cluster 2026-03-09T17:58:25.089634+0000 mgr.y (mgr.14505) 1203 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:58:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:58:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:58:26.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:26 vm02 bash[23351]: cluster 2026-03-09T17:58:25.089634+0000 mgr.y (mgr.14505) 1203 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:26.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:26 vm02 bash[23351]: cluster 2026-03-09T17:58:25.089634+0000 mgr.y (mgr.14505) 1203 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:28 vm00 bash[28333]: cluster 2026-03-09T17:58:27.090013+0000 mgr.y (mgr.14505) 1204 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:28 vm00 bash[28333]: cluster 2026-03-09T17:58:27.090013+0000 mgr.y (mgr.14505) 1204 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:28 vm00 bash[20770]: cluster 2026-03-09T17:58:27.090013+0000 mgr.y (mgr.14505) 1204 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:28 vm00 bash[20770]: cluster 2026-03-09T17:58:27.090013+0000 mgr.y (mgr.14505) 1204 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:28.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:28 vm02 bash[23351]: cluster 2026-03-09T17:58:27.090013+0000 mgr.y (mgr.14505) 1204 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:28 vm02 bash[23351]: cluster 2026-03-09T17:58:27.090013+0000 mgr.y (mgr.14505) 1204 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:29 vm00 bash[28333]: audit 2026-03-09T17:58:28.894402+0000 mon.c (mon.2) 929 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:29 vm00 bash[28333]: audit 2026-03-09T17:58:28.894402+0000 mon.c (mon.2) 929 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:29 vm00 bash[20770]: audit 2026-03-09T17:58:28.894402+0000 mon.c (mon.2) 929 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:29.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:29 vm00 bash[20770]: audit 2026-03-09T17:58:28.894402+0000 mon.c (mon.2) 929 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:29.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:29 vm02 bash[23351]: audit 2026-03-09T17:58:28.894402+0000 mon.c (mon.2) 929 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:29.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:29 vm02 bash[23351]: audit 2026-03-09T17:58:28.894402+0000 mon.c (mon.2) 929 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:30.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:30 vm00 bash[28333]: cluster 2026-03-09T17:58:29.090916+0000 mgr.y (mgr.14505) 1205 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:30 vm00 bash[28333]: cluster 2026-03-09T17:58:29.090916+0000 mgr.y (mgr.14505) 1205 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:30 vm00 bash[20770]: cluster 2026-03-09T17:58:29.090916+0000 mgr.y (mgr.14505) 1205 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:30 vm00 bash[20770]: cluster 2026-03-09T17:58:29.090916+0000 mgr.y (mgr.14505) 1205 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:30.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:30 vm02 bash[23351]: cluster 2026-03-09T17:58:29.090916+0000 mgr.y (mgr.14505) 1205 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:30.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:30 vm02 bash[23351]: cluster 2026-03-09T17:58:29.090916+0000 mgr.y (mgr.14505) 1205 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:32 vm00 bash[28333]: cluster 2026-03-09T17:58:31.091274+0000 mgr.y (mgr.14505) 1206 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:32 vm00 bash[28333]: cluster 2026-03-09T17:58:31.091274+0000 mgr.y (mgr.14505) 1206 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:32 vm00 bash[20770]: cluster 2026-03-09T17:58:31.091274+0000 mgr.y (mgr.14505) 1206 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:32 vm00 bash[20770]: cluster 2026-03-09T17:58:31.091274+0000 mgr.y (mgr.14505) 1206 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:32.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:32 vm02 bash[23351]: cluster 2026-03-09T17:58:31.091274+0000 mgr.y (mgr.14505) 1206 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:32 vm02 bash[23351]: cluster 2026-03-09T17:58:31.091274+0000 mgr.y (mgr.14505) 1206 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:33.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:58:32 vm02 bash[48996]: debug 
there is no tcmu-runner data available 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:34 vm00 bash[28333]: audit 2026-03-09T17:58:32.998561+0000 mgr.y (mgr.14505) 1207 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:34 vm00 bash[28333]: audit 2026-03-09T17:58:32.998561+0000 mgr.y (mgr.14505) 1207 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:34 vm00 bash[28333]: cluster 2026-03-09T17:58:33.092040+0000 mgr.y (mgr.14505) 1208 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:34 vm00 bash[28333]: cluster 2026-03-09T17:58:33.092040+0000 mgr.y (mgr.14505) 1208 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:34 vm00 bash[20770]: audit 2026-03-09T17:58:32.998561+0000 mgr.y (mgr.14505) 1207 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:34 vm00 bash[20770]: audit 2026-03-09T17:58:32.998561+0000 mgr.y (mgr.14505) 1207 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:34 vm00 bash[20770]: cluster 2026-03-09T17:58:33.092040+0000 mgr.y (mgr.14505) 1208 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:34.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:34 vm00 bash[20770]: cluster 2026-03-09T17:58:33.092040+0000 mgr.y (mgr.14505) 1208 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:34 vm02 bash[23351]: audit 2026-03-09T17:58:32.998561+0000 mgr.y (mgr.14505) 1207 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:34 vm02 bash[23351]: audit 2026-03-09T17:58:32.998561+0000 mgr.y (mgr.14505) 1207 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:34.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:34 vm02 bash[23351]: cluster 2026-03-09T17:58:33.092040+0000 mgr.y (mgr.14505) 1208 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:34 vm02 bash[23351]: cluster 2026-03-09T17:58:33.092040+0000 mgr.y (mgr.14505) 1208 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:36.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:36 vm00 bash[28333]: cluster 2026-03-09T17:58:35.092504+0000 mgr.y (mgr.14505) 1209 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:36.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:36 vm00 bash[28333]: cluster 2026-03-09T17:58:35.092504+0000 mgr.y (mgr.14505) 1209 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:36 vm00 bash[20770]: cluster 2026-03-09T17:58:35.092504+0000 mgr.y (mgr.14505) 1209 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:36.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:36 vm00 bash[20770]: cluster 2026-03-09T17:58:35.092504+0000 mgr.y (mgr.14505) 1209 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:36.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:58:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:58:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:58:36.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:36 vm02 bash[23351]: cluster 2026-03-09T17:58:35.092504+0000 mgr.y (mgr.14505) 1209 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:36 vm02 bash[23351]: cluster 2026-03-09T17:58:35.092504+0000 mgr.y (mgr.14505) 1209 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:38 vm00 bash[28333]: cluster 2026-03-09T17:58:37.092809+0000 mgr.y (mgr.14505) 1210 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:38.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:38 vm00 bash[28333]: cluster 2026-03-09T17:58:37.092809+0000 mgr.y (mgr.14505) 1210 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:38 vm00 bash[20770]: cluster 2026-03-09T17:58:37.092809+0000 mgr.y (mgr.14505) 1210 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:38.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:38 vm00 bash[20770]: cluster 2026-03-09T17:58:37.092809+0000 mgr.y (mgr.14505) 1210 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:38 vm02 bash[23351]: cluster 2026-03-09T17:58:37.092809+0000 mgr.y (mgr.14505) 1210 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:38.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:38 vm02 bash[23351]: cluster 2026-03-09T17:58:37.092809+0000 mgr.y (mgr.14505) 1210 : cluster [DBG] pgmap v1626: 228 pgs: 228 
active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:40 vm00 bash[28333]: cluster 2026-03-09T17:58:39.093529+0000 mgr.y (mgr.14505) 1211 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:40 vm00 bash[28333]: cluster 2026-03-09T17:58:39.093529+0000 mgr.y (mgr.14505) 1211 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:40 vm00 bash[20770]: cluster 2026-03-09T17:58:39.093529+0000 mgr.y (mgr.14505) 1211 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:40 vm00 bash[20770]: cluster 2026-03-09T17:58:39.093529+0000 mgr.y (mgr.14505) 1211 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:40 vm02 bash[23351]: cluster 2026-03-09T17:58:39.093529+0000 mgr.y (mgr.14505) 1211 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:40.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:40 vm02 bash[23351]: cluster 2026-03-09T17:58:39.093529+0000 mgr.y (mgr.14505) 1211 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:42 vm00 bash[28333]: cluster 2026-03-09T17:58:41.093945+0000 mgr.y (mgr.14505) 1212 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:42 vm00 bash[28333]: cluster 2026-03-09T17:58:41.093945+0000 mgr.y (mgr.14505) 1212 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:42 vm00 bash[20770]: cluster 2026-03-09T17:58:41.093945+0000 mgr.y (mgr.14505) 1212 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:42 vm00 bash[20770]: cluster 2026-03-09T17:58:41.093945+0000 mgr.y (mgr.14505) 1212 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:42.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:42 vm02 bash[23351]: cluster 2026-03-09T17:58:41.093945+0000 mgr.y (mgr.14505) 1212 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:42.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:42 vm02 bash[23351]: cluster 2026-03-09T17:58:41.093945+0000 mgr.y (mgr.14505) 1212 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T17:58:43.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:58:43 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:44 vm00 bash[28333]: audit 2026-03-09T17:58:43.004113+0000 mgr.y (mgr.14505) 1213 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:44 vm00 bash[28333]: audit 2026-03-09T17:58:43.004113+0000 mgr.y (mgr.14505) 1213 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:44 vm00 bash[28333]: cluster 2026-03-09T17:58:43.094611+0000 mgr.y (mgr.14505) 1214 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:44 vm00 bash[28333]: cluster 2026-03-09T17:58:43.094611+0000 mgr.y (mgr.14505) 1214 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:44 vm00 bash[28333]: audit 2026-03-09T17:58:43.902079+0000 mon.c (mon.2) 930 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:44 vm00 bash[28333]: audit 2026-03-09T17:58:43.902079+0000 mon.c (mon.2) 930 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:44 vm00 bash[20770]: audit 2026-03-09T17:58:43.004113+0000 mgr.y (mgr.14505) 1213 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:44 vm00 bash[20770]: audit 2026-03-09T17:58:43.004113+0000 mgr.y (mgr.14505) 1213 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:44 vm00 bash[20770]: cluster 2026-03-09T17:58:43.094611+0000 mgr.y (mgr.14505) 1214 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:44 vm00 bash[20770]: cluster 2026-03-09T17:58:43.094611+0000 mgr.y (mgr.14505) 1214 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:44 vm00 bash[20770]: audit 2026-03-09T17:58:43.902079+0000 mon.c (mon.2) 930 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:44.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:44 vm00 bash[20770]: audit 2026-03-09T17:58:43.902079+0000 mon.c (mon.2) 930 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:44 vm02 bash[23351]: audit 2026-03-09T17:58:43.004113+0000 mgr.y (mgr.14505) 1213 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:44 vm02 bash[23351]: audit 2026-03-09T17:58:43.004113+0000 mgr.y (mgr.14505) 1213 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:44 vm02 bash[23351]: cluster 2026-03-09T17:58:43.094611+0000 mgr.y (mgr.14505) 1214 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:44 vm02 bash[23351]: cluster 2026-03-09T17:58:43.094611+0000 mgr.y (mgr.14505) 1214 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:44 vm02 bash[23351]: audit 2026-03-09T17:58:43.902079+0000 mon.c (mon.2) 930 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:44 vm02 bash[23351]: audit 2026-03-09T17:58:43.902079+0000 mon.c (mon.2) 930 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:46 vm00 bash[28333]: cluster 2026-03-09T17:58:45.095018+0000 mgr.y (mgr.14505) 1215 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:46 vm00 bash[28333]: cluster 2026-03-09T17:58:45.095018+0000 mgr.y (mgr.14505) 1215 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:46 vm00 bash[20770]: cluster 2026-03-09T17:58:45.095018+0000 mgr.y (mgr.14505) 1215 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:46 vm00 bash[20770]: cluster 2026-03-09T17:58:45.095018+0000 mgr.y (mgr.14505) 1215 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:58:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:58:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:58:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:46 vm02 bash[23351]: cluster 2026-03-09T17:58:45.095018+0000 mgr.y (mgr.14505) 1215 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:46 vm02 
bash[23351]: cluster 2026-03-09T17:58:45.095018+0000 mgr.y (mgr.14505) 1215 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:48 vm00 bash[28333]: cluster 2026-03-09T17:58:47.095386+0000 mgr.y (mgr.14505) 1216 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:48 vm00 bash[28333]: cluster 2026-03-09T17:58:47.095386+0000 mgr.y (mgr.14505) 1216 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:48 vm00 bash[20770]: cluster 2026-03-09T17:58:47.095386+0000 mgr.y (mgr.14505) 1216 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:48 vm00 bash[20770]: cluster 2026-03-09T17:58:47.095386+0000 mgr.y (mgr.14505) 1216 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:48 vm02 bash[23351]: cluster 2026-03-09T17:58:47.095386+0000 mgr.y (mgr.14505) 1216 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:48 vm02 bash[23351]: cluster 2026-03-09T17:58:47.095386+0000 mgr.y (mgr.14505) 1216 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:50 vm00 bash[28333]: cluster 2026-03-09T17:58:49.096100+0000 mgr.y (mgr.14505) 1217 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:50.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:50 vm00 bash[28333]: cluster 2026-03-09T17:58:49.096100+0000 mgr.y (mgr.14505) 1217 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:50 vm00 bash[20770]: cluster 2026-03-09T17:58:49.096100+0000 mgr.y (mgr.14505) 1217 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:50.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:50 vm00 bash[20770]: cluster 2026-03-09T17:58:49.096100+0000 mgr.y (mgr.14505) 1217 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:50 vm02 bash[23351]: cluster 2026-03-09T17:58:49.096100+0000 mgr.y (mgr.14505) 1217 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:50.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:50 vm02 bash[23351]: cluster 2026-03-09T17:58:49.096100+0000 mgr.y (mgr.14505) 1217 : cluster 
[DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:52 vm00 bash[28333]: cluster 2026-03-09T17:58:51.096459+0000 mgr.y (mgr.14505) 1218 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:52.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:52 vm00 bash[28333]: cluster 2026-03-09T17:58:51.096459+0000 mgr.y (mgr.14505) 1218 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:52 vm00 bash[20770]: cluster 2026-03-09T17:58:51.096459+0000 mgr.y (mgr.14505) 1218 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:52.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:52 vm00 bash[20770]: cluster 2026-03-09T17:58:51.096459+0000 mgr.y (mgr.14505) 1218 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:52 vm02 bash[23351]: cluster 2026-03-09T17:58:51.096459+0000 mgr.y (mgr.14505) 1218 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:52.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:52 vm02 bash[23351]: cluster 2026-03-09T17:58:51.096459+0000 mgr.y (mgr.14505) 1218 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T17:58:53.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:58:53 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:58:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:54 vm02 bash[23351]: audit 2026-03-09T17:58:53.012664+0000 mgr.y (mgr.14505) 1219 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:54 vm02 bash[23351]: audit 2026-03-09T17:58:53.012664+0000 mgr.y (mgr.14505) 1219 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:54 vm02 bash[23351]: cluster 2026-03-09T17:58:53.097135+0000 mgr.y (mgr.14505) 1220 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:54 vm02 bash[23351]: cluster 2026-03-09T17:58:53.097135+0000 mgr.y (mgr.14505) 1220 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:54 vm02 bash[23351]: cluster 2026-03-09T17:58:53.530144+0000 mon.a (mon.0) 3561 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T17:58:54.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:54 vm02 bash[23351]: cluster 2026-03-09T17:58:53.530144+0000 mon.a (mon.0) 3561 : cluster [DBG] osdmap e739: 8 total, 8 
up, 8 in 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:54 vm00 bash[28333]: audit 2026-03-09T17:58:53.012664+0000 mgr.y (mgr.14505) 1219 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:54 vm00 bash[28333]: audit 2026-03-09T17:58:53.012664+0000 mgr.y (mgr.14505) 1219 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:54 vm00 bash[28333]: cluster 2026-03-09T17:58:53.097135+0000 mgr.y (mgr.14505) 1220 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:54 vm00 bash[28333]: cluster 2026-03-09T17:58:53.097135+0000 mgr.y (mgr.14505) 1220 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:54 vm00 bash[28333]: cluster 2026-03-09T17:58:53.530144+0000 mon.a (mon.0) 3561 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:54 vm00 bash[28333]: cluster 2026-03-09T17:58:53.530144+0000 mon.a (mon.0) 3561 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:54 vm00 bash[20770]: audit 2026-03-09T17:58:53.012664+0000 mgr.y (mgr.14505) 1219 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:54 vm00 bash[20770]: audit 2026-03-09T17:58:53.012664+0000 mgr.y (mgr.14505) 1219 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:54 vm00 bash[20770]: cluster 2026-03-09T17:58:53.097135+0000 mgr.y (mgr.14505) 1220 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:54 vm00 bash[20770]: cluster 2026-03-09T17:58:53.097135+0000 mgr.y (mgr.14505) 1220 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:54 vm00 bash[20770]: cluster 2026-03-09T17:58:53.530144+0000 mon.a (mon.0) 3561 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T17:58:55.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:54 vm00 bash[20770]: cluster 2026-03-09T17:58:53.530144+0000 mon.a (mon.0) 3561 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T17:58:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:55 vm02 bash[23351]: cluster 2026-03-09T17:58:54.555213+0000 mon.a (mon.0) 3562 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T17:58:55.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:55 vm02 bash[23351]: cluster 2026-03-09T17:58:54.555213+0000 mon.a (mon.0) 3562 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 
2026-03-09T17:58:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:55 vm00 bash[28333]: cluster 2026-03-09T17:58:54.555213+0000 mon.a (mon.0) 3562 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T17:58:56.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:55 vm00 bash[28333]: cluster 2026-03-09T17:58:54.555213+0000 mon.a (mon.0) 3562 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T17:58:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:55 vm00 bash[20770]: cluster 2026-03-09T17:58:54.555213+0000 mon.a (mon.0) 3562 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T17:58:56.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:55 vm00 bash[20770]: cluster 2026-03-09T17:58:54.555213+0000 mon.a (mon.0) 3562 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T17:58:56.572 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:58:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:58:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:58:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:56 vm02 bash[23351]: cluster 2026-03-09T17:58:55.097475+0000 mgr.y (mgr.14505) 1221 : cluster [DBG] pgmap v1637: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:56 vm02 bash[23351]: cluster 2026-03-09T17:58:55.097475+0000 mgr.y (mgr.14505) 1221 : cluster [DBG] pgmap v1637: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:56 vm02 bash[23351]: cluster 2026-03-09T17:58:55.584579+0000 mon.a (mon.0) 3563 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T17:58:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:56 vm02 bash[23351]: cluster 2026-03-09T17:58:55.584579+0000 mon.a (mon.0) 3563 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T17:58:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:56 vm00 bash[28333]: cluster 2026-03-09T17:58:55.097475+0000 mgr.y (mgr.14505) 1221 : cluster [DBG] pgmap v1637: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:56 vm00 bash[28333]: cluster 2026-03-09T17:58:55.097475+0000 mgr.y (mgr.14505) 1221 : cluster [DBG] pgmap v1637: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:56 vm00 bash[28333]: cluster 2026-03-09T17:58:55.584579+0000 mon.a (mon.0) 3563 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T17:58:57.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:56 vm00 bash[28333]: cluster 2026-03-09T17:58:55.584579+0000 mon.a (mon.0) 3563 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T17:58:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:56 vm00 bash[20770]: cluster 2026-03-09T17:58:55.097475+0000 mgr.y (mgr.14505) 1221 : cluster [DBG] pgmap v1637: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:56 vm00 bash[20770]: cluster 2026-03-09T17:58:55.097475+0000 mgr.y (mgr.14505) 1221 : cluster [DBG] pgmap v1637: 228 pgs: 32 
unknown, 196 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:56 vm00 bash[20770]: cluster 2026-03-09T17:58:55.584579+0000 mon.a (mon.0) 3563 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T17:58:57.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:56 vm00 bash[20770]: cluster 2026-03-09T17:58:55.584579+0000 mon.a (mon.0) 3563 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T17:58:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:57 vm02 bash[23351]: cluster 2026-03-09T17:58:56.602723+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T17:58:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:57 vm02 bash[23351]: cluster 2026-03-09T17:58:56.602723+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T17:58:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:57 vm02 bash[23351]: cluster 2026-03-09T17:58:57.098355+0000 mgr.y (mgr.14505) 1222 : cluster [DBG] pgmap v1640: 164 pgs: 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:58:57.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:57 vm02 bash[23351]: cluster 2026-03-09T17:58:57.098355+0000 mgr.y (mgr.14505) 1222 : cluster [DBG] pgmap v1640: 164 pgs: 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:57 vm00 bash[28333]: cluster 2026-03-09T17:58:56.602723+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:57 vm00 bash[28333]: cluster 2026-03-09T17:58:56.602723+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:57 vm00 bash[28333]: cluster 2026-03-09T17:58:57.098355+0000 mgr.y (mgr.14505) 1222 : cluster [DBG] pgmap v1640: 164 pgs: 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:57 vm00 bash[28333]: cluster 2026-03-09T17:58:57.098355+0000 mgr.y (mgr.14505) 1222 : cluster [DBG] pgmap v1640: 164 pgs: 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:57 vm00 bash[20770]: cluster 2026-03-09T17:58:56.602723+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:57 vm00 bash[20770]: cluster 2026-03-09T17:58:56.602723+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:57 vm00 bash[20770]: cluster 2026-03-09T17:58:57.098355+0000 mgr.y (mgr.14505) 1222 : cluster [DBG] pgmap v1640: 164 pgs: 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:58:58.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:57 vm00 bash[20770]: cluster 2026-03-09T17:58:57.098355+0000 mgr.y (mgr.14505) 1222 : cluster [DBG] pgmap v1640: 164 pgs: 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:58:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:58 vm02 bash[23351]: cluster 2026-03-09T17:58:57.582301+0000 mon.a (mon.0) 3565 : cluster [WRN] Health check update: 1 pool(s) do 
not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:58:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:58 vm02 bash[23351]: cluster 2026-03-09T17:58:57.582301+0000 mon.a (mon.0) 3565 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:58:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:58 vm02 bash[23351]: cluster 2026-03-09T17:58:57.610861+0000 mon.a (mon.0) 3566 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T17:58:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:58 vm02 bash[23351]: cluster 2026-03-09T17:58:57.610861+0000 mon.a (mon.0) 3566 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:58 vm00 bash[28333]: cluster 2026-03-09T17:58:57.582301+0000 mon.a (mon.0) 3565 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:58 vm00 bash[28333]: cluster 2026-03-09T17:58:57.582301+0000 mon.a (mon.0) 3565 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:58 vm00 bash[28333]: cluster 2026-03-09T17:58:57.610861+0000 mon.a (mon.0) 3566 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:58 vm00 bash[28333]: cluster 2026-03-09T17:58:57.610861+0000 mon.a (mon.0) 3566 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:58 vm00 bash[20770]: cluster 2026-03-09T17:58:57.582301+0000 mon.a (mon.0) 3565 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:58 vm00 bash[20770]: cluster 2026-03-09T17:58:57.582301+0000 mon.a (mon.0) 3565 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:58 vm00 bash[20770]: cluster 2026-03-09T17:58:57.610861+0000 mon.a (mon.0) 3566 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T17:58:59.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:58 vm00 bash[20770]: cluster 2026-03-09T17:58:57.610861+0000 mon.a (mon.0) 3566 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T17:58:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:59 vm02 bash[23351]: cluster 2026-03-09T17:58:58.600789+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T17:58:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:59 vm02 bash[23351]: cluster 2026-03-09T17:58:58.600789+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T17:58:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:59 vm02 bash[23351]: audit 2026-03-09T17:58:58.908333+0000 mon.c (mon.2) 931 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:58:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:59 vm02 bash[23351]: audit 2026-03-09T17:58:58.908333+0000 mon.c (mon.2) 931 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-09T17:58:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:59 vm02 bash[23351]: cluster 2026-03-09T17:58:59.098666+0000 mgr.y (mgr.14505) 1223 : cluster [DBG] pgmap v1643: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:58:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:58:59 vm02 bash[23351]: cluster 2026-03-09T17:58:59.098666+0000 mgr.y (mgr.14505) 1223 : cluster [DBG] pgmap v1643: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:59 vm00 bash[28333]: cluster 2026-03-09T17:58:58.600789+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:59 vm00 bash[28333]: cluster 2026-03-09T17:58:58.600789+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:59 vm00 bash[28333]: audit 2026-03-09T17:58:58.908333+0000 mon.c (mon.2) 931 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:59 vm00 bash[28333]: audit 2026-03-09T17:58:58.908333+0000 mon.c (mon.2) 931 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:59 vm00 bash[28333]: cluster 2026-03-09T17:58:59.098666+0000 mgr.y (mgr.14505) 1223 : cluster [DBG] pgmap v1643: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:58:59 vm00 bash[28333]: cluster 2026-03-09T17:58:59.098666+0000 mgr.y (mgr.14505) 1223 : cluster [DBG] pgmap v1643: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:59 vm00 bash[20770]: cluster 2026-03-09T17:58:58.600789+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:59 vm00 bash[20770]: cluster 2026-03-09T17:58:58.600789+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:59 vm00 bash[20770]: audit 2026-03-09T17:58:58.908333+0000 mon.c (mon.2) 931 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:59 vm00 bash[20770]: audit 2026-03-09T17:58:58.908333+0000 mon.c (mon.2) 931 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:59 vm00 bash[20770]: cluster 2026-03-09T17:58:59.098666+0000 mgr.y (mgr.14505) 1223 : cluster [DBG] pgmap v1643: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:00.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:58:59 vm00 bash[20770]: cluster 2026-03-09T17:58:59.098666+0000 mgr.y (mgr.14505) 1223 : cluster [DBG] pgmap v1643: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:01.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:00 vm00 bash[28333]: cluster 2026-03-09T17:58:59.644993+0000 mon.a (mon.0) 3568 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T17:59:01.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:00 vm00 bash[28333]: cluster 2026-03-09T17:58:59.644993+0000 mon.a (mon.0) 3568 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T17:59:01.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:00 vm00 bash[20770]: cluster 2026-03-09T17:58:59.644993+0000 mon.a (mon.0) 3568 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T17:59:01.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:00 vm00 bash[20770]: cluster 2026-03-09T17:58:59.644993+0000 mon.a (mon.0) 3568 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T17:59:01.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:00 vm02 bash[23351]: cluster 2026-03-09T17:58:59.644993+0000 mon.a (mon.0) 3568 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T17:59:01.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:00 vm02 bash[23351]: cluster 2026-03-09T17:58:59.644993+0000 mon.a (mon.0) 3568 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:01 vm00 bash[28333]: cluster 2026-03-09T17:59:00.636893+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:01 vm00 bash[28333]: cluster 2026-03-09T17:59:00.636893+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:01 vm00 bash[28333]: cluster 2026-03-09T17:59:01.099035+0000 mgr.y (mgr.14505) 1224 : cluster [DBG] pgmap v1646: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:01 vm00 bash[28333]: cluster 2026-03-09T17:59:01.099035+0000 mgr.y (mgr.14505) 1224 : cluster [DBG] pgmap v1646: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:01 vm00 bash[28333]: cluster 2026-03-09T17:59:01.650394+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:01 vm00 bash[28333]: cluster 2026-03-09T17:59:01.650394+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:01 vm00 bash[20770]: cluster 2026-03-09T17:59:00.636893+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:01 vm00 bash[20770]: cluster 2026-03-09T17:59:00.636893+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:01 vm00 bash[20770]: cluster 
2026-03-09T17:59:01.099035+0000 mgr.y (mgr.14505) 1224 : cluster [DBG] pgmap v1646: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:01 vm00 bash[20770]: cluster 2026-03-09T17:59:01.099035+0000 mgr.y (mgr.14505) 1224 : cluster [DBG] pgmap v1646: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:02.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:01 vm00 bash[20770]: cluster 2026-03-09T17:59:01.650394+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T17:59:02.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:01 vm00 bash[20770]: cluster 2026-03-09T17:59:01.650394+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T17:59:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:01 vm02 bash[23351]: cluster 2026-03-09T17:59:00.636893+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T17:59:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:01 vm02 bash[23351]: cluster 2026-03-09T17:59:00.636893+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T17:59:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:01 vm02 bash[23351]: cluster 2026-03-09T17:59:01.099035+0000 mgr.y (mgr.14505) 1224 : cluster [DBG] pgmap v1646: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:01 vm02 bash[23351]: cluster 2026-03-09T17:59:01.099035+0000 mgr.y (mgr.14505) 1224 : cluster [DBG] pgmap v1646: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:01 vm02 bash[23351]: cluster 2026-03-09T17:59:01.650394+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T17:59:02.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:01 vm02 bash[23351]: cluster 2026-03-09T17:59:01.650394+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T17:59:03.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:59:03 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:03 vm00 bash[28333]: cluster 2026-03-09T17:59:02.699570+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:03 vm00 bash[28333]: cluster 2026-03-09T17:59:02.699570+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:03 vm00 bash[28333]: audit 2026-03-09T17:59:03.023249+0000 mgr.y (mgr.14505) 1225 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:03 vm00 bash[28333]: audit 2026-03-09T17:59:03.023249+0000 mgr.y (mgr.14505) 1225 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:03 vm00 bash[28333]: cluster 2026-03-09T17:59:03.099672+0000 mgr.y (mgr.14505) 1226 : cluster [DBG] pgmap v1649: 196 pgs: 21 creating+peering, 11 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:03 vm00 bash[28333]: cluster 2026-03-09T17:59:03.099672+0000 mgr.y (mgr.14505) 1226 : cluster [DBG] pgmap v1649: 196 pgs: 21 creating+peering, 11 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:03 vm00 bash[20770]: cluster 2026-03-09T17:59:02.699570+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:03 vm00 bash[20770]: cluster 2026-03-09T17:59:02.699570+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:03 vm00 bash[20770]: audit 2026-03-09T17:59:03.023249+0000 mgr.y (mgr.14505) 1225 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:03 vm00 bash[20770]: audit 2026-03-09T17:59:03.023249+0000 mgr.y (mgr.14505) 1225 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:03 vm00 bash[20770]: cluster 2026-03-09T17:59:03.099672+0000 mgr.y (mgr.14505) 1226 : cluster [DBG] pgmap v1649: 196 pgs: 21 creating+peering, 11 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:04.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:03 vm00 bash[20770]: cluster 2026-03-09T17:59:03.099672+0000 mgr.y (mgr.14505) 1226 : cluster [DBG] pgmap v1649: 196 pgs: 21 creating+peering, 11 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:04.135 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:03 vm02 bash[23351]: cluster 2026-03-09T17:59:02.699570+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T17:59:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:03 vm02 bash[23351]: cluster 2026-03-09T17:59:02.699570+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T17:59:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:03 vm02 bash[23351]: audit 2026-03-09T17:59:03.023249+0000 mgr.y (mgr.14505) 1225 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:03 vm02 bash[23351]: audit 2026-03-09T17:59:03.023249+0000 mgr.y (mgr.14505) 1225 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:03 vm02 bash[23351]: cluster 2026-03-09T17:59:03.099672+0000 mgr.y (mgr.14505) 1226 : cluster [DBG] pgmap v1649: 196 pgs: 21 creating+peering, 11 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:04.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:03 vm02 bash[23351]: cluster 2026-03-09T17:59:03.099672+0000 mgr.y (mgr.14505) 1226 : cluster [DBG] pgmap v1649: 196 pgs: 21 creating+peering, 11 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:04 vm00 bash[28333]: cluster 2026-03-09T17:59:03.691349+0000 mon.a (mon.0) 3572 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:04 vm00 bash[28333]: cluster 2026-03-09T17:59:03.691349+0000 mon.a (mon.0) 3572 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:04 vm00 bash[28333]: cluster 2026-03-09T17:59:03.726434+0000 mon.a (mon.0) 3573 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:04 vm00 bash[28333]: cluster 2026-03-09T17:59:03.726434+0000 mon.a (mon.0) 3573 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:04 vm00 bash[20770]: cluster 2026-03-09T17:59:03.691349+0000 mon.a (mon.0) 3572 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:04 vm00 bash[20770]: cluster 2026-03-09T17:59:03.691349+0000 mon.a (mon.0) 3572 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:04 vm00 bash[20770]: cluster 2026-03-09T17:59:03.726434+0000 mon.a (mon.0) 3573 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T17:59:05.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:04 vm00 bash[20770]: cluster 2026-03-09T17:59:03.726434+0000 mon.a (mon.0) 3573 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T17:59:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:04 vm02 bash[23351]: cluster 2026-03-09T17:59:03.691349+0000 mon.a (mon.0) 3572 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:04 vm02 bash[23351]: cluster 2026-03-09T17:59:03.691349+0000 mon.a (mon.0) 3572 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:04 vm02 bash[23351]: cluster 2026-03-09T17:59:03.726434+0000 mon.a (mon.0) 3573 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T17:59:05.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:04 vm02 bash[23351]: cluster 2026-03-09T17:59:03.726434+0000 mon.a (mon.0) 3573 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:05 vm00 bash[28333]: cluster 2026-03-09T17:59:04.708918+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:05 vm00 bash[28333]: cluster 2026-03-09T17:59:04.708918+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e750: 8 total, 
8 up, 8 in 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:05 vm00 bash[28333]: cluster 2026-03-09T17:59:05.099994+0000 mgr.y (mgr.14505) 1227 : cluster [DBG] pgmap v1652: 228 pgs: 21 creating+peering, 43 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:05 vm00 bash[28333]: cluster 2026-03-09T17:59:05.099994+0000 mgr.y (mgr.14505) 1227 : cluster [DBG] pgmap v1652: 228 pgs: 21 creating+peering, 43 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:05 vm00 bash[20770]: cluster 2026-03-09T17:59:04.708918+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:05 vm00 bash[20770]: cluster 2026-03-09T17:59:04.708918+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:05 vm00 bash[20770]: cluster 2026-03-09T17:59:05.099994+0000 mgr.y (mgr.14505) 1227 : cluster [DBG] pgmap v1652: 228 pgs: 21 creating+peering, 43 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:06.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:05 vm00 bash[20770]: cluster 2026-03-09T17:59:05.099994+0000 mgr.y (mgr.14505) 1227 : cluster [DBG] pgmap v1652: 228 pgs: 21 creating+peering, 43 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:05 vm02 bash[23351]: cluster 2026-03-09T17:59:04.708918+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T17:59:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:05 vm02 bash[23351]: cluster 2026-03-09T17:59:04.708918+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T17:59:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:05 vm02 bash[23351]: cluster 2026-03-09T17:59:05.099994+0000 mgr.y (mgr.14505) 1227 : cluster [DBG] pgmap v1652: 228 pgs: 21 creating+peering, 43 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:06.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:05 vm02 bash[23351]: cluster 2026-03-09T17:59:05.099994+0000 mgr.y (mgr.14505) 1227 : cluster [DBG] pgmap v1652: 228 pgs: 21 creating+peering, 43 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:06.747 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:59:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:59:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:59:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:06 vm00 bash[28333]: cluster 2026-03-09T17:59:05.727822+0000 mon.a (mon.0) 3575 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T17:59:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:06 vm00 bash[28333]: cluster 2026-03-09T17:59:05.727822+0000 mon.a (mon.0) 3575 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T17:59:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:06 vm00 bash[20770]: cluster 2026-03-09T17:59:05.727822+0000 mon.a (mon.0) 3575 : cluster 
[DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T17:59:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:06 vm00 bash[20770]: cluster 2026-03-09T17:59:05.727822+0000 mon.a (mon.0) 3575 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T17:59:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:06 vm02 bash[23351]: cluster 2026-03-09T17:59:05.727822+0000 mon.a (mon.0) 3575 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T17:59:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:06 vm02 bash[23351]: cluster 2026-03-09T17:59:05.727822+0000 mon.a (mon.0) 3575 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:07 vm00 bash[28333]: cluster 2026-03-09T17:59:06.772494+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:07 vm00 bash[28333]: cluster 2026-03-09T17:59:06.772494+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:07 vm00 bash[28333]: cluster 2026-03-09T17:59:07.100441+0000 mgr.y (mgr.14505) 1228 : cluster [DBG] pgmap v1655: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:07 vm00 bash[28333]: cluster 2026-03-09T17:59:07.100441+0000 mgr.y (mgr.14505) 1228 : cluster [DBG] pgmap v1655: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:07 vm00 bash[20770]: cluster 2026-03-09T17:59:06.772494+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:07 vm00 bash[20770]: cluster 2026-03-09T17:59:06.772494+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:07 vm00 bash[20770]: cluster 2026-03-09T17:59:07.100441+0000 mgr.y (mgr.14505) 1228 : cluster [DBG] pgmap v1655: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:07 vm00 bash[20770]: cluster 2026-03-09T17:59:07.100441+0000 mgr.y (mgr.14505) 1228 : cluster [DBG] pgmap v1655: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:07 vm02 bash[23351]: cluster 2026-03-09T17:59:06.772494+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T17:59:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:07 vm02 bash[23351]: cluster 2026-03-09T17:59:06.772494+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T17:59:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:07 vm02 bash[23351]: cluster 2026-03-09T17:59:07.100441+0000 mgr.y (mgr.14505) 1228 : cluster [DBG] pgmap v1655: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:07 vm02 bash[23351]: cluster 2026-03-09T17:59:07.100441+0000 mgr.y (mgr.14505) 1228 : cluster [DBG] pgmap v1655: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1015 MiB used, 159 GiB 
/ 160 GiB avail 2026-03-09T17:59:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:08 vm00 bash[28333]: cluster 2026-03-09T17:59:07.897610+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T17:59:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:08 vm00 bash[28333]: cluster 2026-03-09T17:59:07.897610+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T17:59:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:08 vm00 bash[20770]: cluster 2026-03-09T17:59:07.897610+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T17:59:09.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:08 vm00 bash[20770]: cluster 2026-03-09T17:59:07.897610+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T17:59:09.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:08 vm02 bash[23351]: cluster 2026-03-09T17:59:07.897610+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T17:59:09.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:08 vm02 bash[23351]: cluster 2026-03-09T17:59:07.897610+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:09 vm00 bash[28333]: cluster 2026-03-09T17:59:08.923862+0000 mon.a (mon.0) 3578 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:09 vm00 bash[28333]: cluster 2026-03-09T17:59:08.923862+0000 mon.a (mon.0) 3578 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:09 vm00 bash[28333]: cluster 2026-03-09T17:59:09.100784+0000 mgr.y (mgr.14505) 1229 : cluster [DBG] pgmap v1658: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:09 vm00 bash[28333]: cluster 2026-03-09T17:59:09.100784+0000 mgr.y (mgr.14505) 1229 : cluster [DBG] pgmap v1658: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:09 vm00 bash[20770]: cluster 2026-03-09T17:59:08.923862+0000 mon.a (mon.0) 3578 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:09 vm00 bash[20770]: cluster 2026-03-09T17:59:08.923862+0000 mon.a (mon.0) 3578 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:09 vm00 bash[20770]: cluster 2026-03-09T17:59:09.100784+0000 mgr.y (mgr.14505) 1229 : cluster [DBG] pgmap v1658: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:10.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:09 vm00 bash[20770]: cluster 2026-03-09T17:59:09.100784+0000 mgr.y (mgr.14505) 1229 : cluster [DBG] pgmap v1658: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:09 vm02 bash[23351]: cluster 2026-03-09T17:59:08.923862+0000 mon.a (mon.0) 3578 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T17:59:10.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:09 vm02 bash[23351]: cluster 2026-03-09T17:59:08.923862+0000 mon.a (mon.0) 3578 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T17:59:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:09 vm02 bash[23351]: cluster 2026-03-09T17:59:09.100784+0000 mgr.y (mgr.14505) 1229 : cluster [DBG] pgmap v1658: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:10.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:09 vm02 bash[23351]: cluster 2026-03-09T17:59:09.100784+0000 mgr.y (mgr.14505) 1229 : cluster [DBG] pgmap v1658: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:10 vm00 bash[28333]: cluster 2026-03-09T17:59:09.936368+0000 mon.a (mon.0) 3579 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:10 vm00 bash[28333]: cluster 2026-03-09T17:59:09.936368+0000 mon.a (mon.0) 3579 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:10 vm00 bash[28333]: cluster 2026-03-09T17:59:09.975669+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:10 vm00 bash[28333]: cluster 2026-03-09T17:59:09.975669+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:10 vm00 bash[20770]: cluster 2026-03-09T17:59:09.936368+0000 mon.a (mon.0) 3579 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:10 vm00 bash[20770]: cluster 2026-03-09T17:59:09.936368+0000 mon.a (mon.0) 3579 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:10 vm00 bash[20770]: cluster 2026-03-09T17:59:09.975669+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T17:59:11.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:10 vm00 bash[20770]: cluster 2026-03-09T17:59:09.975669+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T17:59:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:10 vm02 bash[23351]: cluster 2026-03-09T17:59:09.936368+0000 mon.a (mon.0) 3579 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:10 vm02 bash[23351]: cluster 2026-03-09T17:59:09.936368+0000 mon.a (mon.0) 3579 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:10 vm02 bash[23351]: cluster 2026-03-09T17:59:09.975669+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T17:59:11.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:10 vm02 bash[23351]: cluster 2026-03-09T17:59:09.975669+0000 mon.a (mon.0) 3580 : 
cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:11 vm00 bash[28333]: cluster 2026-03-09T17:59:10.963515+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:11 vm00 bash[28333]: cluster 2026-03-09T17:59:10.963515+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:11 vm00 bash[28333]: cluster 2026-03-09T17:59:11.101232+0000 mgr.y (mgr.14505) 1230 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:11 vm00 bash[28333]: cluster 2026-03-09T17:59:11.101232+0000 mgr.y (mgr.14505) 1230 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:11 vm00 bash[20770]: cluster 2026-03-09T17:59:10.963515+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:11 vm00 bash[20770]: cluster 2026-03-09T17:59:10.963515+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:11 vm00 bash[20770]: cluster 2026-03-09T17:59:11.101232+0000 mgr.y (mgr.14505) 1230 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:12.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:11 vm00 bash[20770]: cluster 2026-03-09T17:59:11.101232+0000 mgr.y (mgr.14505) 1230 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:11 vm02 bash[23351]: cluster 2026-03-09T17:59:10.963515+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T17:59:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:11 vm02 bash[23351]: cluster 2026-03-09T17:59:10.963515+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T17:59:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:11 vm02 bash[23351]: cluster 2026-03-09T17:59:11.101232+0000 mgr.y (mgr.14505) 1230 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:12.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:11 vm02 bash[23351]: cluster 2026-03-09T17:59:11.101232+0000 mgr.y (mgr.14505) 1230 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: Running main() from gmock_main.cc 2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [==========] Running 7 tests from 1 test suite. 2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] Global test environment set-up. 
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertExists
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertExists (1800644 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertVersion
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertVersion (3069 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Xattrs
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.Xattrs (3037 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Write
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.Write (3059 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Exec
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.Exec (3032 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.WriteSame
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.WriteSame (3192 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.CmpExt
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.CmpExt (4062 ms)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps (1820096 ms total)
2026-03-09T17:59:12.995 INFO:tasks.workunit.client.0.vm00.stdout: write_operations:
2026-03-09T17:59:12.996 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] Global test environment tear-down
2026-03-09T17:59:12.996 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [==========] 7 tests from 1 test suite ran. (1820097 ms total)
2026-03-09T17:59:12.996 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ PASSED ] 7 tests.
2026-03-09T17:59:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:13 vm00 bash[28333]: cluster 2026-03-09T17:59:11.986234+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T17:59:13.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:13 vm00 bash[28333]: cluster 2026-03-09T17:59:11.986234+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T17:59:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:13 vm00 bash[20770]: cluster 2026-03-09T17:59:11.986234+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T17:59:13.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:13 vm00 bash[20770]: cluster 2026-03-09T17:59:11.986234+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T17:59:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:13 vm02 bash[23351]: cluster 2026-03-09T17:59:11.986234+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T17:59:13.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:13 vm02 bash[23351]: cluster 2026-03-09T17:59:11.986234+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T17:59:13.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:59:13 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: cluster 2026-03-09T17:59:12.978621+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: cluster 2026-03-09T17:59:12.978621+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: audit 2026-03-09T17:59:13.033466+0000 mgr.y (mgr.14505) 1231 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: audit 2026-03-09T17:59:13.033466+0000 mgr.y (mgr.14505) 1231 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: cluster 2026-03-09T17:59:13.101593+0000 mgr.y (mgr.14505) 1232 : cluster [DBG] pgmap v1664: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: cluster 2026-03-09T17:59:13.101593+0000 mgr.y (mgr.14505) 1232 : cluster [DBG] pgmap v1664: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: audit 2026-03-09T17:59:13.914891+0000 mon.c (mon.2) 932 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:14 vm00 bash[28333]: audit 2026-03-09T17:59:13.914891+0000 mon.c (mon.2) 932 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: cluster 2026-03-09T17:59:12.978621+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: cluster 2026-03-09T17:59:12.978621+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: audit 2026-03-09T17:59:13.033466+0000 mgr.y (mgr.14505) 1231 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: audit 2026-03-09T17:59:13.033466+0000 mgr.y (mgr.14505) 1231 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: cluster 2026-03-09T17:59:13.101593+0000 mgr.y (mgr.14505) 1232 : cluster [DBG] pgmap v1664: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: cluster 2026-03-09T17:59:13.101593+0000 mgr.y (mgr.14505) 1232 : cluster [DBG] pgmap v1664: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: audit 2026-03-09T17:59:13.914891+0000 mon.c (mon.2) 932 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:14.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:14 vm00 bash[20770]: audit 2026-03-09T17:59:13.914891+0000 mon.c (mon.2) 932 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: cluster 2026-03-09T17:59:12.978621+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: cluster 2026-03-09T17:59:12.978621+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: audit 2026-03-09T17:59:13.033466+0000 mgr.y (mgr.14505) 1231 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: audit 2026-03-09T17:59:13.033466+0000 mgr.y (mgr.14505) 1231 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: cluster 2026-03-09T17:59:13.101593+0000 mgr.y (mgr.14505) 1232 : cluster [DBG] pgmap v1664: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:14.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: cluster 2026-03-09T17:59:13.101593+0000 mgr.y (mgr.14505) 1232 : cluster [DBG] pgmap v1664: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: audit 2026-03-09T17:59:13.914891+0000 mon.c (mon.2) 932 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:14.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:14 vm02 bash[23351]: audit 2026-03-09T17:59:13.914891+0000 mon.c (mon.2) 932 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:15 vm02 bash[23351]: cluster 2026-03-09T17:59:14.017805+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T17:59:15.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:15 vm02 bash[23351]: cluster 2026-03-09T17:59:14.017805+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T17:59:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:15 vm00 bash[28333]: cluster 2026-03-09T17:59:14.017805+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T17:59:15.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:15 vm00 bash[28333]: cluster 2026-03-09T17:59:14.017805+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T17:59:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:15 vm00 bash[20770]: cluster 2026-03-09T17:59:14.017805+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T17:59:15.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:15 vm00 bash[20770]: cluster 2026-03-09T17:59:14.017805+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:16 vm00 bash[28333]: cluster 2026-03-09T17:59:15.101962+0000 mgr.y (mgr.14505) 1233 : cluster [DBG] pgmap v1666: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:16 vm00 bash[28333]: cluster 2026-03-09T17:59:15.101962+0000 mgr.y (mgr.14505) 1233 : cluster [DBG] pgmap v1666: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:16 vm00 bash[28333]: cluster 2026-03-09T17:59:15.137038+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:16 vm00 bash[28333]: cluster 2026-03-09T17:59:15.137038+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:16 vm00 bash[20770]: cluster 2026-03-09T17:59:15.101962+0000 mgr.y (mgr.14505) 1233 : cluster [DBG] pgmap v1666: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:16 vm00 bash[20770]: cluster 
2026-03-09T17:59:15.101962+0000 mgr.y (mgr.14505) 1233 : cluster [DBG] pgmap v1666: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:16 vm00 bash[20770]: cluster 2026-03-09T17:59:15.137038+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:16 vm00 bash[20770]: cluster 2026-03-09T17:59:15.137038+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T17:59:16.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:59:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:59:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:59:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:16 vm02 bash[23351]: cluster 2026-03-09T17:59:15.101962+0000 mgr.y (mgr.14505) 1233 : cluster [DBG] pgmap v1666: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:16 vm02 bash[23351]: cluster 2026-03-09T17:59:15.101962+0000 mgr.y (mgr.14505) 1233 : cluster [DBG] pgmap v1666: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:16 vm02 bash[23351]: cluster 2026-03-09T17:59:15.137038+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T17:59:16.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:16 vm02 bash[23351]: cluster 2026-03-09T17:59:15.137038+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T17:59:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:17 vm00 bash[28333]: cluster 2026-03-09T17:59:16.176198+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T17:59:17.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:17 vm00 bash[28333]: cluster 2026-03-09T17:59:16.176198+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T17:59:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:17 vm00 bash[20770]: cluster 2026-03-09T17:59:16.176198+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T17:59:17.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:17 vm00 bash[20770]: cluster 2026-03-09T17:59:16.176198+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T17:59:17.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:17 vm02 bash[23351]: cluster 2026-03-09T17:59:16.176198+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T17:59:17.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:17 vm02 bash[23351]: cluster 2026-03-09T17:59:16.176198+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:18 vm00 bash[28333]: cluster 2026-03-09T17:59:17.102356+0000 mgr.y (mgr.14505) 1234 : cluster [DBG] pgmap v1669: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:18 vm00 bash[28333]: cluster 2026-03-09T17:59:17.102356+0000 mgr.y (mgr.14505) 1234 : cluster [DBG] pgmap 
v1669: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:18 vm00 bash[28333]: cluster 2026-03-09T17:59:17.163902+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:18 vm00 bash[28333]: cluster 2026-03-09T17:59:17.163902+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:18 vm00 bash[28333]: cluster 2026-03-09T17:59:18.166346+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:18 vm00 bash[28333]: cluster 2026-03-09T17:59:18.166346+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:18 vm00 bash[20770]: cluster 2026-03-09T17:59:17.102356+0000 mgr.y (mgr.14505) 1234 : cluster [DBG] pgmap v1669: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:18 vm00 bash[20770]: cluster 2026-03-09T17:59:17.102356+0000 mgr.y (mgr.14505) 1234 : cluster [DBG] pgmap v1669: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:18 vm00 bash[20770]: cluster 2026-03-09T17:59:17.163902+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:18 vm00 bash[20770]: cluster 2026-03-09T17:59:17.163902+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:18 vm00 bash[20770]: cluster 2026-03-09T17:59:18.166346+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T17:59:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:18 vm00 bash[20770]: cluster 2026-03-09T17:59:18.166346+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T17:59:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:18 vm02 bash[23351]: cluster 2026-03-09T17:59:17.102356+0000 mgr.y (mgr.14505) 1234 : cluster [DBG] pgmap v1669: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:18 vm02 bash[23351]: cluster 2026-03-09T17:59:17.102356+0000 mgr.y (mgr.14505) 1234 : cluster [DBG] pgmap v1669: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:18 vm02 bash[23351]: cluster 2026-03-09T17:59:17.163902+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T17:59:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:18 vm02 bash[23351]: cluster 2026-03-09T17:59:17.163902+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T17:59:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:18 vm02 bash[23351]: cluster 2026-03-09T17:59:18.166346+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T17:59:18.636 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:18 vm02 bash[23351]: cluster 2026-03-09T17:59:18.166346+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:19 vm00 bash[28333]: cluster 2026-03-09T17:59:19.177262+0000 mon.a (mon.0) 3589 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:19 vm00 bash[28333]: cluster 2026-03-09T17:59:19.177262+0000 mon.a (mon.0) 3589 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:19 vm00 bash[28333]: cluster 2026-03-09T17:59:19.205741+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:19 vm00 bash[28333]: cluster 2026-03-09T17:59:19.205741+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:19 vm00 bash[20770]: cluster 2026-03-09T17:59:19.177262+0000 mon.a (mon.0) 3589 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:19 vm00 bash[20770]: cluster 2026-03-09T17:59:19.177262+0000 mon.a (mon.0) 3589 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:19 vm00 bash[20770]: cluster 2026-03-09T17:59:19.205741+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T17:59:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:19 vm00 bash[20770]: cluster 2026-03-09T17:59:19.205741+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T17:59:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:19 vm02 bash[23351]: cluster 2026-03-09T17:59:19.177262+0000 mon.a (mon.0) 3589 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:19 vm02 bash[23351]: cluster 2026-03-09T17:59:19.177262+0000 mon.a (mon.0) 3589 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:19 vm02 bash[23351]: cluster 2026-03-09T17:59:19.205741+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T17:59:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:19 vm02 bash[23351]: cluster 2026-03-09T17:59:19.205741+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T17:59:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:20 vm00 bash[28333]: cluster 2026-03-09T17:59:19.102677+0000 mgr.y (mgr.14505) 1235 : cluster [DBG] pgmap v1672: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:20.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:20 vm00 bash[28333]: cluster 2026-03-09T17:59:19.102677+0000 mgr.y (mgr.14505) 1235 : cluster [DBG] pgmap v1672: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T17:59:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:20 vm00 bash[20770]: cluster 2026-03-09T17:59:19.102677+0000 mgr.y (mgr.14505) 1235 : cluster [DBG] pgmap v1672: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:20.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:20 vm00 bash[20770]: cluster 2026-03-09T17:59:19.102677+0000 mgr.y (mgr.14505) 1235 : cluster [DBG] pgmap v1672: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:20 vm02 bash[23351]: cluster 2026-03-09T17:59:19.102677+0000 mgr.y (mgr.14505) 1235 : cluster [DBG] pgmap v1672: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:20.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:20 vm02 bash[23351]: cluster 2026-03-09T17:59:19.102677+0000 mgr.y (mgr.14505) 1235 : cluster [DBG] pgmap v1672: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:21 vm02 bash[23351]: cluster 2026-03-09T17:59:20.205655+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T17:59:21.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:21 vm02 bash[23351]: cluster 2026-03-09T17:59:20.205655+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T17:59:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:21 vm00 bash[20770]: cluster 2026-03-09T17:59:20.205655+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T17:59:21.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:21 vm00 bash[20770]: cluster 2026-03-09T17:59:20.205655+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T17:59:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:21 vm00 bash[28333]: cluster 2026-03-09T17:59:20.205655+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T17:59:21.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:21 vm00 bash[28333]: cluster 2026-03-09T17:59:20.205655+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T17:59:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:22 vm02 bash[23351]: cluster 2026-03-09T17:59:21.102955+0000 mgr.y (mgr.14505) 1236 : cluster [DBG] pgmap v1675: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:22 vm02 bash[23351]: cluster 2026-03-09T17:59:21.102955+0000 mgr.y (mgr.14505) 1236 : cluster [DBG] pgmap v1675: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:22 vm02 bash[23351]: cluster 2026-03-09T17:59:21.301729+0000 mon.a (mon.0) 3592 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T17:59:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:22 vm02 bash[23351]: cluster 2026-03-09T17:59:21.301729+0000 mon.a (mon.0) 3592 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T17:59:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:22 vm02 bash[23351]: audit 2026-03-09T17:59:21.308258+0000 mon.c 
(mon.2) 933 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:59:22.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:22 vm02 bash[23351]: audit 2026-03-09T17:59:21.308258+0000 mon.c (mon.2) 933 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:22 vm00 bash[28333]: cluster 2026-03-09T17:59:21.102955+0000 mgr.y (mgr.14505) 1236 : cluster [DBG] pgmap v1675: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:22 vm00 bash[28333]: cluster 2026-03-09T17:59:21.102955+0000 mgr.y (mgr.14505) 1236 : cluster [DBG] pgmap v1675: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:22 vm00 bash[28333]: cluster 2026-03-09T17:59:21.301729+0000 mon.a (mon.0) 3592 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:22 vm00 bash[28333]: cluster 2026-03-09T17:59:21.301729+0000 mon.a (mon.0) 3592 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:22 vm00 bash[28333]: audit 2026-03-09T17:59:21.308258+0000 mon.c (mon.2) 933 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:22 vm00 bash[28333]: audit 2026-03-09T17:59:21.308258+0000 mon.c (mon.2) 933 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:22 vm00 bash[20770]: cluster 2026-03-09T17:59:21.102955+0000 mgr.y (mgr.14505) 1236 : cluster [DBG] pgmap v1675: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:22 vm00 bash[20770]: cluster 2026-03-09T17:59:21.102955+0000 mgr.y (mgr.14505) 1236 : cluster [DBG] pgmap v1675: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:22 vm00 bash[20770]: cluster 2026-03-09T17:59:21.301729+0000 mon.a (mon.0) 3592 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:22 vm00 bash[20770]: cluster 2026-03-09T17:59:21.301729+0000 mon.a (mon.0) 3592 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:22 vm00 bash[20770]: audit 2026-03-09T17:59:21.308258+0000 mon.c (mon.2) 933 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:59:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:22 vm00 bash[20770]: audit 2026-03-09T17:59:21.308258+0000 mon.c (mon.2) 933 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:59:23.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:59:23 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:59:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:23 vm02 bash[23351]: cluster 2026-03-09T17:59:22.360583+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T17:59:23.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:23 vm02 bash[23351]: cluster 2026-03-09T17:59:22.360583+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T17:59:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:23 vm00 bash[28333]: cluster 2026-03-09T17:59:22.360583+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T17:59:23.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:23 vm00 bash[28333]: cluster 2026-03-09T17:59:22.360583+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T17:59:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:23 vm00 bash[20770]: cluster 2026-03-09T17:59:22.360583+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T17:59:23.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:23 vm00 bash[20770]: cluster 2026-03-09T17:59:22.360583+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:24 vm00 bash[28333]: audit 2026-03-09T17:59:23.040179+0000 mgr.y (mgr.14505) 1237 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:24 vm00 bash[28333]: audit 2026-03-09T17:59:23.040179+0000 mgr.y (mgr.14505) 1237 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:24 vm00 bash[28333]: cluster 2026-03-09T17:59:23.103586+0000 mgr.y (mgr.14505) 1238 : cluster [DBG] pgmap v1678: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:24 vm00 bash[28333]: cluster 2026-03-09T17:59:23.103586+0000 mgr.y (mgr.14505) 1238 : cluster [DBG] pgmap v1678: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:24 vm00 bash[28333]: cluster 2026-03-09T17:59:23.375593+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:24 vm00 bash[28333]: cluster 2026-03-09T17:59:23.375593+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:24 vm00 bash[20770]: audit 2026-03-09T17:59:23.040179+0000 mgr.y (mgr.14505) 1237 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:24 vm00 bash[20770]: audit 2026-03-09T17:59:23.040179+0000 mgr.y (mgr.14505) 1237 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:24 vm00 bash[20770]: cluster 2026-03-09T17:59:23.103586+0000 mgr.y (mgr.14505) 1238 : cluster [DBG] pgmap v1678: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:24 vm00 bash[20770]: cluster 2026-03-09T17:59:23.103586+0000 mgr.y (mgr.14505) 1238 : cluster [DBG] pgmap v1678: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:24 vm00 bash[20770]: cluster 2026-03-09T17:59:23.375593+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T17:59:24.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:24 vm00 bash[20770]: cluster 2026-03-09T17:59:23.375593+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T17:59:24.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:24 vm02 bash[23351]: audit 2026-03-09T17:59:23.040179+0000 mgr.y (mgr.14505) 1237 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:24 vm02 bash[23351]: audit 2026-03-09T17:59:23.040179+0000 mgr.y (mgr.14505) 1237 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:24 vm02 bash[23351]: cluster 2026-03-09T17:59:23.103586+0000 mgr.y (mgr.14505) 1238 : cluster [DBG] pgmap v1678: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:59:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:24 vm02 bash[23351]: cluster 2026-03-09T17:59:23.103586+0000 mgr.y (mgr.14505) 1238 : cluster [DBG] pgmap v1678: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T17:59:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:24 vm02 bash[23351]: cluster 2026-03-09T17:59:23.375593+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T17:59:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:24 vm02 bash[23351]: cluster 2026-03-09T17:59:23.375593+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T17:59:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:25 vm00 bash[28333]: cluster 2026-03-09T17:59:24.396036+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T17:59:25.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:25 vm00 bash[28333]: cluster 2026-03-09T17:59:24.396036+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T17:59:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:25 vm00 bash[20770]: cluster 2026-03-09T17:59:24.396036+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T17:59:25.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:25 vm00 bash[20770]: cluster 
2026-03-09T17:59:24.396036+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T17:59:25.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:25 vm02 bash[23351]: cluster 2026-03-09T17:59:24.396036+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T17:59:25.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:25 vm02 bash[23351]: cluster 2026-03-09T17:59:24.396036+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:26 vm00 bash[28333]: cluster 2026-03-09T17:59:25.103887+0000 mgr.y (mgr.14505) 1239 : cluster [DBG] pgmap v1681: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:26 vm00 bash[28333]: cluster 2026-03-09T17:59:25.103887+0000 mgr.y (mgr.14505) 1239 : cluster [DBG] pgmap v1681: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:26 vm00 bash[28333]: cluster 2026-03-09T17:59:25.409173+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:26 vm00 bash[28333]: cluster 2026-03-09T17:59:25.409173+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:26 vm00 bash[20770]: cluster 2026-03-09T17:59:25.103887+0000 mgr.y (mgr.14505) 1239 : cluster [DBG] pgmap v1681: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:26 vm00 bash[20770]: cluster 2026-03-09T17:59:25.103887+0000 mgr.y (mgr.14505) 1239 : cluster [DBG] pgmap v1681: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:26 vm00 bash[20770]: cluster 2026-03-09T17:59:25.409173+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:26 vm00 bash[20770]: cluster 2026-03-09T17:59:25.409173+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T17:59:26.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:59:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:59:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:59:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:26 vm02 bash[23351]: cluster 2026-03-09T17:59:25.103887+0000 mgr.y (mgr.14505) 1239 : cluster [DBG] pgmap v1681: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:26 vm02 bash[23351]: cluster 2026-03-09T17:59:25.103887+0000 mgr.y (mgr.14505) 1239 : cluster [DBG] pgmap v1681: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:26 vm02 bash[23351]: cluster 2026-03-09T17:59:25.409173+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 
2026-03-09T17:59:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:26 vm02 bash[23351]: cluster 2026-03-09T17:59:25.409173+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: cluster 2026-03-09T17:59:26.443059+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: cluster 2026-03-09T17:59:26.443059+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.525194+0000 mon.a (mon.0) 3598 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.525194+0000 mon.a (mon.0) 3598 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.530310+0000 mon.a (mon.0) 3599 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.530310+0000 mon.a (mon.0) 3599 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.730754+0000 mon.a (mon.0) 3600 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.730754+0000 mon.a (mon.0) 3600 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.738341+0000 mon.a (mon.0) 3601 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:26.738341+0000 mon.a (mon.0) 3601 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: cluster 2026-03-09T17:59:26.955230+0000 mon.a (mon.0) 3602 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: cluster 2026-03-09T17:59:26.955230+0000 mon.a (mon.0) 3602 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:27.047111+0000 mon.c (mon.2) 934 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:27.047111+0000 mon.c (mon.2) 934 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:27.047804+0000 mon.c (mon.2) 935 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:27.047804+0000 mon.c (mon.2) 935 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:27.053959+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:27 vm00 bash[28333]: audit 2026-03-09T17:59:27.053959+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: cluster 2026-03-09T17:59:26.443059+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: cluster 2026-03-09T17:59:26.443059+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.525194+0000 mon.a (mon.0) 3598 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.525194+0000 mon.a (mon.0) 3598 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.530310+0000 mon.a (mon.0) 3599 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.530310+0000 mon.a (mon.0) 3599 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.730754+0000 mon.a (mon.0) 3600 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.730754+0000 mon.a (mon.0) 3600 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.738341+0000 mon.a (mon.0) 3601 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:26.738341+0000 mon.a (mon.0) 3601 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: cluster 2026-03-09T17:59:26.955230+0000 mon.a (mon.0) 3602 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: cluster 2026-03-09T17:59:26.955230+0000 mon.a (mon.0) 3602 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:27.047111+0000 mon.c (mon.2) 934 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:27.047111+0000 mon.c (mon.2) 934 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:27.047804+0000 mon.c (mon.2) 935 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:27.047804+0000 mon.c (mon.2) 935 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:27.053959+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:27 vm00 bash[20770]: audit 2026-03-09T17:59:27.053959+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: cluster 2026-03-09T17:59:26.443059+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: cluster 2026-03-09T17:59:26.443059+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.525194+0000 mon.a (mon.0) 3598 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.525194+0000 mon.a (mon.0) 3598 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.530310+0000 mon.a (mon.0) 3599 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.530310+0000 mon.a (mon.0) 3599 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.730754+0000 mon.a (mon.0) 3600 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.730754+0000 mon.a (mon.0) 3600 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.738341+0000 mon.a (mon.0) 3601 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:26.738341+0000 mon.a (mon.0) 3601 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: cluster 2026-03-09T17:59:26.955230+0000 mon.a (mon.0) 3602 : cluster [WRN] Health check update: 2 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: cluster 2026-03-09T17:59:26.955230+0000 mon.a (mon.0) 3602 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:27.047111+0000 mon.c (mon.2) 934 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:27.047111+0000 mon.c (mon.2) 934 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:27.047804+0000 mon.c (mon.2) 935 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:27.047804+0000 mon.c (mon.2) 935 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:27.053959+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:27 vm02 bash[23351]: audit 2026-03-09T17:59:27.053959+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:28 vm00 bash[28333]: cluster 2026-03-09T17:59:27.104909+0000 mgr.y (mgr.14505) 1240 : cluster [DBG] pgmap v1684: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:28 vm00 bash[28333]: cluster 2026-03-09T17:59:27.104909+0000 mgr.y (mgr.14505) 1240 : cluster [DBG] pgmap v1684: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:28 vm00 bash[28333]: cluster 2026-03-09T17:59:27.481438+0000 mon.a (mon.0) 3604 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:28 vm00 bash[28333]: cluster 2026-03-09T17:59:27.481438+0000 mon.a (mon.0) 3604 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:28 vm00 bash[20770]: cluster 2026-03-09T17:59:27.104909+0000 mgr.y (mgr.14505) 1240 : cluster [DBG] pgmap v1684: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:28 vm00 bash[20770]: cluster 2026-03-09T17:59:27.104909+0000 mgr.y (mgr.14505) 1240 : cluster [DBG] pgmap v1684: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:28 vm00 bash[20770]: cluster 2026-03-09T17:59:27.481438+0000 mon.a (mon.0) 3604 : 
cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T17:59:28.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:28 vm00 bash[20770]: cluster 2026-03-09T17:59:27.481438+0000 mon.a (mon.0) 3604 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T17:59:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:28 vm02 bash[23351]: cluster 2026-03-09T17:59:27.104909+0000 mgr.y (mgr.14505) 1240 : cluster [DBG] pgmap v1684: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:28 vm02 bash[23351]: cluster 2026-03-09T17:59:27.104909+0000 mgr.y (mgr.14505) 1240 : cluster [DBG] pgmap v1684: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:28 vm02 bash[23351]: cluster 2026-03-09T17:59:27.481438+0000 mon.a (mon.0) 3604 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T17:59:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:28 vm02 bash[23351]: cluster 2026-03-09T17:59:27.481438+0000 mon.a (mon.0) 3604 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T17:59:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:29 vm00 bash[28333]: cluster 2026-03-09T17:59:28.486522+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T17:59:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:29 vm00 bash[28333]: cluster 2026-03-09T17:59:28.486522+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T17:59:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:29 vm00 bash[28333]: audit 2026-03-09T17:59:28.921256+0000 mon.c (mon.2) 936 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:29 vm00 bash[28333]: audit 2026-03-09T17:59:28.921256+0000 mon.c (mon.2) 936 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:29 vm00 bash[28333]: cluster 2026-03-09T17:59:29.482769+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T17:59:29.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:29 vm00 bash[28333]: cluster 2026-03-09T17:59:29.482769+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T17:59:29.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:29 vm00 bash[20770]: cluster 2026-03-09T17:59:28.486522+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T17:59:29.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:29 vm00 bash[20770]: cluster 2026-03-09T17:59:28.486522+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T17:59:29.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:29 vm00 bash[20770]: audit 2026-03-09T17:59:28.921256+0000 mon.c (mon.2) 936 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:29.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:29 vm00 bash[20770]: audit 2026-03-09T17:59:28.921256+0000 mon.c (mon.2) 936 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:29.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:29 vm00 bash[20770]: cluster 2026-03-09T17:59:29.482769+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T17:59:29.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:29 vm00 bash[20770]: cluster 2026-03-09T17:59:29.482769+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T17:59:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:29 vm02 bash[23351]: cluster 2026-03-09T17:59:28.486522+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T17:59:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:29 vm02 bash[23351]: cluster 2026-03-09T17:59:28.486522+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T17:59:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:29 vm02 bash[23351]: audit 2026-03-09T17:59:28.921256+0000 mon.c (mon.2) 936 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:29 vm02 bash[23351]: audit 2026-03-09T17:59:28.921256+0000 mon.c (mon.2) 936 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:29 vm02 bash[23351]: cluster 2026-03-09T17:59:29.482769+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T17:59:29.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:29 vm02 bash[23351]: cluster 2026-03-09T17:59:29.482769+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:30 vm00 bash[28333]: cluster 2026-03-09T17:59:29.105286+0000 mgr.y (mgr.14505) 1241 : cluster [DBG] pgmap v1687: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:30 vm00 bash[28333]: cluster 2026-03-09T17:59:29.105286+0000 mgr.y (mgr.14505) 1241 : cluster [DBG] pgmap v1687: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:30 vm00 bash[28333]: cluster 2026-03-09T17:59:30.485923+0000 mon.a (mon.0) 3607 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:30 vm00 bash[28333]: cluster 2026-03-09T17:59:30.485923+0000 mon.a (mon.0) 3607 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:30 vm00 bash[20770]: cluster 2026-03-09T17:59:29.105286+0000 mgr.y (mgr.14505) 1241 : cluster [DBG] pgmap v1687: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:30 vm00 bash[20770]: cluster 2026-03-09T17:59:29.105286+0000 mgr.y (mgr.14505) 1241 : cluster [DBG] pgmap v1687: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:30 vm00 bash[20770]: cluster 2026-03-09T17:59:30.485923+0000 mon.a (mon.0) 3607 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T17:59:30.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:30 vm00 bash[20770]: cluster 2026-03-09T17:59:30.485923+0000 mon.a (mon.0) 3607 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T17:59:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:30 vm02 bash[23351]: cluster 2026-03-09T17:59:29.105286+0000 mgr.y (mgr.14505) 1241 : cluster [DBG] pgmap v1687: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:59:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:30 vm02 bash[23351]: cluster 2026-03-09T17:59:29.105286+0000 mgr.y (mgr.14505) 1241 : cluster [DBG] pgmap v1687: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T17:59:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:30 vm02 bash[23351]: cluster 2026-03-09T17:59:30.485923+0000 mon.a (mon.0) 3607 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T17:59:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:30 vm02 bash[23351]: cluster 2026-03-09T17:59:30.485923+0000 mon.a (mon.0) 3607 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:32 vm00 bash[28333]: cluster 2026-03-09T17:59:31.105633+0000 mgr.y (mgr.14505) 1242 : cluster [DBG] pgmap v1690: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:32 vm00 bash[28333]: cluster 2026-03-09T17:59:31.105633+0000 mgr.y (mgr.14505) 1242 : cluster [DBG] pgmap v1690: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:32 vm00 bash[28333]: cluster 2026-03-09T17:59:31.494138+0000 mon.a (mon.0) 3608 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:32 vm00 bash[28333]: cluster 2026-03-09T17:59:31.494138+0000 mon.a (mon.0) 3608 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:32 vm00 bash[20770]: cluster 2026-03-09T17:59:31.105633+0000 mgr.y (mgr.14505) 1242 : cluster [DBG] pgmap v1690: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:32 vm00 bash[20770]: cluster 2026-03-09T17:59:31.105633+0000 mgr.y (mgr.14505) 1242 : cluster [DBG] pgmap v1690: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:32 vm00 bash[20770]: cluster 2026-03-09T17:59:31.494138+0000 mon.a (mon.0) 3608 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T17:59:32.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:32 vm00 bash[20770]: cluster 2026-03-09T17:59:31.494138+0000 mon.a (mon.0) 3608 : cluster [DBG] osdmap 
e776: 8 total, 8 up, 8 in 2026-03-09T17:59:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:32 vm02 bash[23351]: cluster 2026-03-09T17:59:31.105633+0000 mgr.y (mgr.14505) 1242 : cluster [DBG] pgmap v1690: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:32 vm02 bash[23351]: cluster 2026-03-09T17:59:31.105633+0000 mgr.y (mgr.14505) 1242 : cluster [DBG] pgmap v1690: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:32 vm02 bash[23351]: cluster 2026-03-09T17:59:31.494138+0000 mon.a (mon.0) 3608 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T17:59:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:32 vm02 bash[23351]: cluster 2026-03-09T17:59:31.494138+0000 mon.a (mon.0) 3608 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T17:59:33.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:59:33 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:59:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:33 vm00 bash[28333]: cluster 2026-03-09T17:59:32.505478+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T17:59:33.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:33 vm00 bash[28333]: cluster 2026-03-09T17:59:32.505478+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T17:59:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:33 vm00 bash[20770]: cluster 2026-03-09T17:59:32.505478+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T17:59:33.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:33 vm00 bash[20770]: cluster 2026-03-09T17:59:32.505478+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T17:59:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:33 vm02 bash[23351]: cluster 2026-03-09T17:59:32.505478+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T17:59:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:33 vm02 bash[23351]: cluster 2026-03-09T17:59:32.505478+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T17:59:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:34 vm00 bash[28333]: audit 2026-03-09T17:59:33.050788+0000 mgr.y (mgr.14505) 1243 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:34 vm00 bash[28333]: audit 2026-03-09T17:59:33.050788+0000 mgr.y (mgr.14505) 1243 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:34 vm00 bash[28333]: cluster 2026-03-09T17:59:33.106444+0000 mgr.y (mgr.14505) 1244 : cluster [DBG] pgmap v1693: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-09T17:59:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:34 vm00 bash[28333]: cluster 2026-03-09T17:59:33.106444+0000 mgr.y (mgr.14505) 1244 : cluster [DBG] pgmap v1693: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 511 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-09T17:59:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:34 vm00 bash[28333]: cluster 2026-03-09T17:59:33.511395+0000 mon.a (mon.0) 3610 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T17:59:34.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:34 vm00 bash[28333]: cluster 2026-03-09T17:59:33.511395+0000 mon.a (mon.0) 3610 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T17:59:34.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:34 vm00 bash[20770]: audit 2026-03-09T17:59:33.050788+0000 mgr.y (mgr.14505) 1243 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:34.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:34 vm00 bash[20770]: audit 2026-03-09T17:59:33.050788+0000 mgr.y (mgr.14505) 1243 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:34.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:34 vm00 bash[20770]: cluster 2026-03-09T17:59:33.106444+0000 mgr.y (mgr.14505) 1244 : cluster [DBG] pgmap v1693: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-09T17:59:34.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:34 vm00 bash[20770]: cluster 2026-03-09T17:59:33.106444+0000 mgr.y (mgr.14505) 1244 : cluster [DBG] pgmap v1693: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-09T17:59:34.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:34 vm00 bash[20770]: cluster 2026-03-09T17:59:33.511395+0000 mon.a (mon.0) 3610 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T17:59:34.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:34 vm00 bash[20770]: cluster 2026-03-09T17:59:33.511395+0000 mon.a (mon.0) 3610 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T17:59:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:34 vm02 bash[23351]: audit 2026-03-09T17:59:33.050788+0000 mgr.y (mgr.14505) 1243 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:34 vm02 bash[23351]: audit 2026-03-09T17:59:33.050788+0000 mgr.y (mgr.14505) 1243 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:34 vm02 bash[23351]: cluster 2026-03-09T17:59:33.106444+0000 mgr.y (mgr.14505) 1244 : cluster [DBG] pgmap v1693: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-09T17:59:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:34 vm02 bash[23351]: cluster 2026-03-09T17:59:33.106444+0000 mgr.y (mgr.14505) 1244 : cluster [DBG] pgmap v1693: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-09T17:59:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:34 vm02 bash[23351]: cluster 2026-03-09T17:59:33.511395+0000 mon.a (mon.0) 3610 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T17:59:34.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:34 vm02 bash[23351]: cluster 
2026-03-09T17:59:33.511395+0000 mon.a (mon.0) 3610 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T17:59:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:35 vm00 bash[28333]: cluster 2026-03-09T17:59:34.536050+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T17:59:35.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:35 vm00 bash[28333]: cluster 2026-03-09T17:59:34.536050+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T17:59:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:35 vm00 bash[20770]: cluster 2026-03-09T17:59:34.536050+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T17:59:35.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:35 vm00 bash[20770]: cluster 2026-03-09T17:59:34.536050+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T17:59:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:35 vm02 bash[23351]: cluster 2026-03-09T17:59:34.536050+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T17:59:35.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:35 vm02 bash[23351]: cluster 2026-03-09T17:59:34.536050+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T17:59:36.551 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:59:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:59:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:59:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:36 vm02 bash[23351]: cluster 2026-03-09T17:59:35.106828+0000 mgr.y (mgr.14505) 1245 : cluster [DBG] pgmap v1696: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:59:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:36 vm02 bash[23351]: cluster 2026-03-09T17:59:35.106828+0000 mgr.y (mgr.14505) 1245 : cluster [DBG] pgmap v1696: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:59:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:36 vm02 bash[23351]: cluster 2026-03-09T17:59:35.540114+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T17:59:36.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:36 vm02 bash[23351]: cluster 2026-03-09T17:59:35.540114+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:36 vm00 bash[28333]: cluster 2026-03-09T17:59:35.106828+0000 mgr.y (mgr.14505) 1245 : cluster [DBG] pgmap v1696: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:36 vm00 bash[28333]: cluster 2026-03-09T17:59:35.106828+0000 mgr.y (mgr.14505) 1245 : cluster [DBG] pgmap v1696: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:36 vm00 bash[28333]: cluster 2026-03-09T17:59:35.540114+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:36 vm00 bash[28333]: cluster 2026-03-09T17:59:35.540114+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e780: 8 
total, 8 up, 8 in 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:36 vm00 bash[20770]: cluster 2026-03-09T17:59:35.106828+0000 mgr.y (mgr.14505) 1245 : cluster [DBG] pgmap v1696: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:36 vm00 bash[20770]: cluster 2026-03-09T17:59:35.106828+0000 mgr.y (mgr.14505) 1245 : cluster [DBG] pgmap v1696: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 4 op/s 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:36 vm00 bash[20770]: cluster 2026-03-09T17:59:35.540114+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T17:59:37.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:36 vm00 bash[20770]: cluster 2026-03-09T17:59:35.540114+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T17:59:37.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:37 vm02 bash[23351]: cluster 2026-03-09T17:59:36.545772+0000 mon.a (mon.0) 3613 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T17:59:37.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:37 vm02 bash[23351]: cluster 2026-03-09T17:59:36.545772+0000 mon.a (mon.0) 3613 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T17:59:37.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:37 vm02 bash[23351]: cluster 2026-03-09T17:59:37.550500+0000 mon.a (mon.0) 3614 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T17:59:37.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:37 vm02 bash[23351]: cluster 2026-03-09T17:59:37.550500+0000 mon.a (mon.0) 3614 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:37 vm00 bash[28333]: cluster 2026-03-09T17:59:36.545772+0000 mon.a (mon.0) 3613 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:37 vm00 bash[28333]: cluster 2026-03-09T17:59:36.545772+0000 mon.a (mon.0) 3613 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:37 vm00 bash[28333]: cluster 2026-03-09T17:59:37.550500+0000 mon.a (mon.0) 3614 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:37 vm00 bash[28333]: cluster 2026-03-09T17:59:37.550500+0000 mon.a (mon.0) 3614 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:37 vm00 bash[20770]: cluster 2026-03-09T17:59:36.545772+0000 mon.a (mon.0) 3613 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:37 vm00 bash[20770]: cluster 2026-03-09T17:59:36.545772+0000 mon.a (mon.0) 3613 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:37 vm00 bash[20770]: cluster 2026-03-09T17:59:37.550500+0000 mon.a (mon.0) 3614 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T17:59:38.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:37 vm00 bash[20770]: cluster 2026-03-09T17:59:37.550500+0000 mon.a (mon.0) 3614 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T17:59:38.886 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:38 vm02 bash[23351]: cluster 2026-03-09T17:59:37.107154+0000 mgr.y (mgr.14505) 1246 : cluster [DBG] pgmap v1699: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:38.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:38 vm02 bash[23351]: cluster 2026-03-09T17:59:37.107154+0000 mgr.y (mgr.14505) 1246 : cluster [DBG] pgmap v1699: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:38 vm00 bash[28333]: cluster 2026-03-09T17:59:37.107154+0000 mgr.y (mgr.14505) 1246 : cluster [DBG] pgmap v1699: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:38 vm00 bash[28333]: cluster 2026-03-09T17:59:37.107154+0000 mgr.y (mgr.14505) 1246 : cluster [DBG] pgmap v1699: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:38 vm00 bash[20770]: cluster 2026-03-09T17:59:37.107154+0000 mgr.y (mgr.14505) 1246 : cluster [DBG] pgmap v1699: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:38 vm00 bash[20770]: cluster 2026-03-09T17:59:37.107154+0000 mgr.y (mgr.14505) 1246 : cluster [DBG] pgmap v1699: 196 pgs: 196 active+clean; 456 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:39 vm02 bash[23351]: cluster 2026-03-09T17:59:38.570501+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T17:59:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:39 vm02 bash[23351]: cluster 2026-03-09T17:59:38.570501+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T17:59:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:39 vm02 bash[23351]: cluster 2026-03-09T17:59:39.107517+0000 mgr.y (mgr.14505) 1247 : cluster [DBG] pgmap v1702: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:39 vm02 bash[23351]: cluster 2026-03-09T17:59:39.107517+0000 mgr.y (mgr.14505) 1247 : cluster [DBG] pgmap v1702: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:39 vm02 bash[23351]: cluster 2026-03-09T17:59:39.577286+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T17:59:39.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:39 vm02 bash[23351]: cluster 2026-03-09T17:59:39.577286+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:39 vm00 bash[28333]: cluster 2026-03-09T17:59:38.570501+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:39 vm00 bash[28333]: cluster 2026-03-09T17:59:38.570501+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:39 vm00 bash[28333]: cluster 2026-03-09T17:59:39.107517+0000 
mgr.y (mgr.14505) 1247 : cluster [DBG] pgmap v1702: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:39 vm00 bash[28333]: cluster 2026-03-09T17:59:39.107517+0000 mgr.y (mgr.14505) 1247 : cluster [DBG] pgmap v1702: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:39 vm00 bash[28333]: cluster 2026-03-09T17:59:39.577286+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:39 vm00 bash[28333]: cluster 2026-03-09T17:59:39.577286+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:39 vm00 bash[20770]: cluster 2026-03-09T17:59:38.570501+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:39 vm00 bash[20770]: cluster 2026-03-09T17:59:38.570501+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:39 vm00 bash[20770]: cluster 2026-03-09T17:59:39.107517+0000 mgr.y (mgr.14505) 1247 : cluster [DBG] pgmap v1702: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:39 vm00 bash[20770]: cluster 2026-03-09T17:59:39.107517+0000 mgr.y (mgr.14505) 1247 : cluster [DBG] pgmap v1702: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:39 vm00 bash[20770]: cluster 2026-03-09T17:59:39.577286+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T17:59:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:39 vm00 bash[20770]: cluster 2026-03-09T17:59:39.577286+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T17:59:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:41 vm02 bash[23351]: cluster 2026-03-09T17:59:40.578988+0000 mon.a (mon.0) 3617 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T17:59:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:41 vm02 bash[23351]: cluster 2026-03-09T17:59:40.578988+0000 mon.a (mon.0) 3617 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T17:59:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:41 vm02 bash[23351]: cluster 2026-03-09T17:59:41.107932+0000 mgr.y (mgr.14505) 1248 : cluster [DBG] pgmap v1705: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:41.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:41 vm02 bash[23351]: cluster 2026-03-09T17:59:41.107932+0000 mgr.y (mgr.14505) 1248 : cluster [DBG] pgmap v1705: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:41 vm00 bash[28333]: cluster 2026-03-09T17:59:40.578988+0000 mon.a (mon.0) 3617 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T17:59:42.038 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:41 vm00 bash[28333]: cluster 2026-03-09T17:59:40.578988+0000 mon.a (mon.0) 3617 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:41 vm00 bash[28333]: cluster 2026-03-09T17:59:41.107932+0000 mgr.y (mgr.14505) 1248 : cluster [DBG] pgmap v1705: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:41 vm00 bash[28333]: cluster 2026-03-09T17:59:41.107932+0000 mgr.y (mgr.14505) 1248 : cluster [DBG] pgmap v1705: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:41 vm00 bash[20770]: cluster 2026-03-09T17:59:40.578988+0000 mon.a (mon.0) 3617 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:41 vm00 bash[20770]: cluster 2026-03-09T17:59:40.578988+0000 mon.a (mon.0) 3617 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:41 vm00 bash[20770]: cluster 2026-03-09T17:59:41.107932+0000 mgr.y (mgr.14505) 1248 : cluster [DBG] pgmap v1705: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:42.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:41 vm00 bash[20770]: cluster 2026-03-09T17:59:41.107932+0000 mgr.y (mgr.14505) 1248 : cluster [DBG] pgmap v1705: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T17:59:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:42 vm00 bash[28333]: cluster 2026-03-09T17:59:41.632924+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-09T17:59:43.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:42 vm00 bash[28333]: cluster 2026-03-09T17:59:41.632924+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-09T17:59:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:42 vm00 bash[20770]: cluster 2026-03-09T17:59:41.632924+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-09T17:59:43.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:42 vm00 bash[20770]: cluster 2026-03-09T17:59:41.632924+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-09T17:59:43.060 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:42 vm02 bash[23351]: cluster 2026-03-09T17:59:41.632924+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-09T17:59:43.060 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:42 vm02 bash[23351]: cluster 2026-03-09T17:59:41.632924+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-09T17:59:43.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:59:43 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: cluster 2026-03-09T17:59:42.634400+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: cluster 2026-03-09T17:59:42.634400+0000 mon.a (mon.0) 3619 : cluster 
[DBG] osdmap e787: 8 total, 8 up, 8 in 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: audit 2026-03-09T17:59:43.059817+0000 mgr.y (mgr.14505) 1249 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: audit 2026-03-09T17:59:43.059817+0000 mgr.y (mgr.14505) 1249 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: cluster 2026-03-09T17:59:43.108808+0000 mgr.y (mgr.14505) 1250 : cluster [DBG] pgmap v1708: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 1 op/s 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: cluster 2026-03-09T17:59:43.108808+0000 mgr.y (mgr.14505) 1250 : cluster [DBG] pgmap v1708: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 1 op/s 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: cluster 2026-03-09T17:59:43.639825+0000 mon.a (mon.0) 3620 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-09T17:59:44.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:43 vm00 bash[28333]: cluster 2026-03-09T17:59:43.639825+0000 mon.a (mon.0) 3620 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: cluster 2026-03-09T17:59:42.634400+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: cluster 2026-03-09T17:59:42.634400+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: audit 2026-03-09T17:59:43.059817+0000 mgr.y (mgr.14505) 1249 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: audit 2026-03-09T17:59:43.059817+0000 mgr.y (mgr.14505) 1249 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: cluster 2026-03-09T17:59:43.108808+0000 mgr.y (mgr.14505) 1250 : cluster [DBG] pgmap v1708: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 1 op/s 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: cluster 2026-03-09T17:59:43.108808+0000 mgr.y (mgr.14505) 1250 : cluster [DBG] pgmap v1708: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 1 op/s 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: cluster 2026-03-09T17:59:43.639825+0000 mon.a (mon.0) 3620 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-09T17:59:44.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:43 vm00 bash[20770]: cluster 
2026-03-09T17:59:43.639825+0000 mon.a (mon.0) 3620 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: cluster 2026-03-09T17:59:42.634400+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: cluster 2026-03-09T17:59:42.634400+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: audit 2026-03-09T17:59:43.059817+0000 mgr.y (mgr.14505) 1249 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: audit 2026-03-09T17:59:43.059817+0000 mgr.y (mgr.14505) 1249 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: cluster 2026-03-09T17:59:43.108808+0000 mgr.y (mgr.14505) 1250 : cluster [DBG] pgmap v1708: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 1 op/s 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: cluster 2026-03-09T17:59:43.108808+0000 mgr.y (mgr.14505) 1250 : cluster [DBG] pgmap v1708: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 1 op/s 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: cluster 2026-03-09T17:59:43.639825+0000 mon.a (mon.0) 3620 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-09T17:59:44.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:43 vm02 bash[23351]: cluster 2026-03-09T17:59:43.639825+0000 mon.a (mon.0) 3620 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:44 vm00 bash[28333]: audit 2026-03-09T17:59:43.927823+0000 mon.c (mon.2) 937 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:44 vm00 bash[28333]: audit 2026-03-09T17:59:43.927823+0000 mon.c (mon.2) 937 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:44 vm00 bash[28333]: cluster 2026-03-09T17:59:44.667673+0000 mon.a (mon.0) 3621 : cluster [DBG] osdmap e789: 8 total, 8 up, 8 in 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:44 vm00 bash[28333]: cluster 2026-03-09T17:59:44.667673+0000 mon.a (mon.0) 3621 : cluster [DBG] osdmap e789: 8 total, 8 up, 8 in 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:44 vm00 bash[20770]: audit 2026-03-09T17:59:43.927823+0000 mon.c (mon.2) 937 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:44 vm00 bash[20770]: audit 2026-03-09T17:59:43.927823+0000 mon.c (mon.2) 937 : audit 
[DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:44 vm00 bash[20770]: cluster 2026-03-09T17:59:44.667673+0000 mon.a (mon.0) 3621 : cluster [DBG] osdmap e789: 8 total, 8 up, 8 in 2026-03-09T17:59:45.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:44 vm00 bash[20770]: cluster 2026-03-09T17:59:44.667673+0000 mon.a (mon.0) 3621 : cluster [DBG] osdmap e789: 8 total, 8 up, 8 in 2026-03-09T17:59:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:44 vm02 bash[23351]: audit 2026-03-09T17:59:43.927823+0000 mon.c (mon.2) 937 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:44 vm02 bash[23351]: audit 2026-03-09T17:59:43.927823+0000 mon.c (mon.2) 937 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:44 vm02 bash[23351]: cluster 2026-03-09T17:59:44.667673+0000 mon.a (mon.0) 3621 : cluster [DBG] osdmap e789: 8 total, 8 up, 8 in 2026-03-09T17:59:45.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:44 vm02 bash[23351]: cluster 2026-03-09T17:59:44.667673+0000 mon.a (mon.0) 3621 : cluster [DBG] osdmap e789: 8 total, 8 up, 8 in 2026-03-09T17:59:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:45 vm00 bash[28333]: cluster 2026-03-09T17:59:45.109236+0000 mgr.y (mgr.14505) 1251 : cluster [DBG] pgmap v1711: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:46.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:45 vm00 bash[28333]: cluster 2026-03-09T17:59:45.109236+0000 mgr.y (mgr.14505) 1251 : cluster [DBG] pgmap v1711: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:45 vm00 bash[20770]: cluster 2026-03-09T17:59:45.109236+0000 mgr.y (mgr.14505) 1251 : cluster [DBG] pgmap v1711: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:46.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:45 vm00 bash[20770]: cluster 2026-03-09T17:59:45.109236+0000 mgr.y (mgr.14505) 1251 : cluster [DBG] pgmap v1711: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:45 vm02 bash[23351]: cluster 2026-03-09T17:59:45.109236+0000 mgr.y (mgr.14505) 1251 : cluster [DBG] pgmap v1711: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:46.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:45 vm02 bash[23351]: cluster 2026-03-09T17:59:45.109236+0000 mgr.y (mgr.14505) 1251 : cluster [DBG] pgmap v1711: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:59:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:59:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 
2026-03-09T17:59:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:47 vm02 bash[23351]: cluster 2026-03-09T17:59:45.729801+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e790: 8 total, 8 up, 8 in 2026-03-09T17:59:47.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:47 vm02 bash[23351]: cluster 2026-03-09T17:59:45.729801+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e790: 8 total, 8 up, 8 in 2026-03-09T17:59:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:47 vm00 bash[28333]: cluster 2026-03-09T17:59:45.729801+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e790: 8 total, 8 up, 8 in 2026-03-09T17:59:47.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:47 vm00 bash[28333]: cluster 2026-03-09T17:59:45.729801+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e790: 8 total, 8 up, 8 in 2026-03-09T17:59:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:47 vm00 bash[20770]: cluster 2026-03-09T17:59:45.729801+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e790: 8 total, 8 up, 8 in 2026-03-09T17:59:47.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:47 vm00 bash[20770]: cluster 2026-03-09T17:59:45.729801+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e790: 8 total, 8 up, 8 in 2026-03-09T17:59:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:48 vm02 bash[23351]: cluster 2026-03-09T17:59:47.054517+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e791: 8 total, 8 up, 8 in 2026-03-09T17:59:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:48 vm02 bash[23351]: cluster 2026-03-09T17:59:47.054517+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e791: 8 total, 8 up, 8 in 2026-03-09T17:59:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:48 vm02 bash[23351]: cluster 2026-03-09T17:59:47.109521+0000 mgr.y (mgr.14505) 1252 : cluster [DBG] pgmap v1714: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:48 vm02 bash[23351]: cluster 2026-03-09T17:59:47.109521+0000 mgr.y (mgr.14505) 1252 : cluster [DBG] pgmap v1714: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:48 vm00 bash[28333]: cluster 2026-03-09T17:59:47.054517+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e791: 8 total, 8 up, 8 in 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:48 vm00 bash[28333]: cluster 2026-03-09T17:59:47.054517+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e791: 8 total, 8 up, 8 in 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:48 vm00 bash[28333]: cluster 2026-03-09T17:59:47.109521+0000 mgr.y (mgr.14505) 1252 : cluster [DBG] pgmap v1714: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:48 vm00 bash[28333]: cluster 2026-03-09T17:59:47.109521+0000 mgr.y (mgr.14505) 1252 : cluster [DBG] pgmap v1714: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:48 vm00 bash[20770]: cluster 2026-03-09T17:59:47.054517+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e791: 8 total, 8 up, 8 in 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:48 vm00 bash[20770]: cluster 2026-03-09T17:59:47.054517+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap 
e791: 8 total, 8 up, 8 in 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:48 vm00 bash[20770]: cluster 2026-03-09T17:59:47.109521+0000 mgr.y (mgr.14505) 1252 : cluster [DBG] pgmap v1714: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:48 vm00 bash[20770]: cluster 2026-03-09T17:59:47.109521+0000 mgr.y (mgr.14505) 1252 : cluster [DBG] pgmap v1714: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T17:59:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:49 vm00 bash[28333]: cluster 2026-03-09T17:59:48.126379+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e792: 8 total, 8 up, 8 in 2026-03-09T17:59:49.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:49 vm00 bash[28333]: cluster 2026-03-09T17:59:48.126379+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e792: 8 total, 8 up, 8 in 2026-03-09T17:59:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:49 vm00 bash[20770]: cluster 2026-03-09T17:59:48.126379+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e792: 8 total, 8 up, 8 in 2026-03-09T17:59:49.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:49 vm00 bash[20770]: cluster 2026-03-09T17:59:48.126379+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e792: 8 total, 8 up, 8 in 2026-03-09T17:59:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:49 vm02 bash[23351]: cluster 2026-03-09T17:59:48.126379+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e792: 8 total, 8 up, 8 in 2026-03-09T17:59:49.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:49 vm02 bash[23351]: cluster 2026-03-09T17:59:48.126379+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e792: 8 total, 8 up, 8 in 2026-03-09T17:59:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:50 vm00 bash[28333]: cluster 2026-03-09T17:59:49.109965+0000 mgr.y (mgr.14505) 1253 : cluster [DBG] pgmap v1716: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:59:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:50 vm00 bash[28333]: cluster 2026-03-09T17:59:49.109965+0000 mgr.y (mgr.14505) 1253 : cluster [DBG] pgmap v1716: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:59:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:50 vm00 bash[28333]: cluster 2026-03-09T17:59:49.167447+0000 mon.a (mon.0) 3625 : cluster [DBG] osdmap e793: 8 total, 8 up, 8 in 2026-03-09T17:59:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:50 vm00 bash[28333]: cluster 2026-03-09T17:59:49.167447+0000 mon.a (mon.0) 3625 : cluster [DBG] osdmap e793: 8 total, 8 up, 8 in 2026-03-09T17:59:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:50 vm00 bash[28333]: cluster 2026-03-09T17:59:49.203771+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:50 vm00 bash[28333]: cluster 2026-03-09T17:59:49.203771+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:50.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:50 vm00 bash[20770]: cluster 2026-03-09T17:59:49.109965+0000 mgr.y (mgr.14505) 1253 : cluster [DBG] pgmap v1716: 164 pgs: 164 active+clean; 455 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:59:50.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:50 vm00 bash[20770]: cluster 2026-03-09T17:59:49.109965+0000 mgr.y (mgr.14505) 1253 : cluster [DBG] pgmap v1716: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:59:50.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:50 vm00 bash[20770]: cluster 2026-03-09T17:59:49.167447+0000 mon.a (mon.0) 3625 : cluster [DBG] osdmap e793: 8 total, 8 up, 8 in 2026-03-09T17:59:50.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:50 vm00 bash[20770]: cluster 2026-03-09T17:59:49.167447+0000 mon.a (mon.0) 3625 : cluster [DBG] osdmap e793: 8 total, 8 up, 8 in 2026-03-09T17:59:50.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:50 vm00 bash[20770]: cluster 2026-03-09T17:59:49.203771+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:50.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:50 vm00 bash[20770]: cluster 2026-03-09T17:59:49.203771+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:50 vm02 bash[23351]: cluster 2026-03-09T17:59:49.109965+0000 mgr.y (mgr.14505) 1253 : cluster [DBG] pgmap v1716: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:59:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:50 vm02 bash[23351]: cluster 2026-03-09T17:59:49.109965+0000 mgr.y (mgr.14505) 1253 : cluster [DBG] pgmap v1716: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T17:59:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:50 vm02 bash[23351]: cluster 2026-03-09T17:59:49.167447+0000 mon.a (mon.0) 3625 : cluster [DBG] osdmap e793: 8 total, 8 up, 8 in 2026-03-09T17:59:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:50 vm02 bash[23351]: cluster 2026-03-09T17:59:49.167447+0000 mon.a (mon.0) 3625 : cluster [DBG] osdmap e793: 8 total, 8 up, 8 in 2026-03-09T17:59:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:50 vm02 bash[23351]: cluster 2026-03-09T17:59:49.203771+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:50 vm02 bash[23351]: cluster 2026-03-09T17:59:49.203771+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:51 vm00 bash[28333]: cluster 2026-03-09T17:59:50.192176+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e794: 8 total, 8 up, 8 in 2026-03-09T17:59:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:51 vm00 bash[28333]: cluster 2026-03-09T17:59:50.192176+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e794: 8 total, 8 up, 8 in 2026-03-09T17:59:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:51 vm00 bash[20770]: cluster 2026-03-09T17:59:50.192176+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e794: 8 total, 8 up, 8 in 2026-03-09T17:59:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:51 vm00 bash[20770]: cluster 2026-03-09T17:59:50.192176+0000 
mon.a (mon.0) 3627 : cluster [DBG] osdmap e794: 8 total, 8 up, 8 in 2026-03-09T17:59:51.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:51 vm02 bash[23351]: cluster 2026-03-09T17:59:50.192176+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e794: 8 total, 8 up, 8 in 2026-03-09T17:59:51.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:51 vm02 bash[23351]: cluster 2026-03-09T17:59:50.192176+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e794: 8 total, 8 up, 8 in 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:52 vm00 bash[28333]: cluster 2026-03-09T17:59:51.110392+0000 mgr.y (mgr.14505) 1254 : cluster [DBG] pgmap v1719: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:52 vm00 bash[28333]: cluster 2026-03-09T17:59:51.110392+0000 mgr.y (mgr.14505) 1254 : cluster [DBG] pgmap v1719: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:52 vm00 bash[28333]: cluster 2026-03-09T17:59:51.201617+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e795: 8 total, 8 up, 8 in 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:52 vm00 bash[28333]: cluster 2026-03-09T17:59:51.201617+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e795: 8 total, 8 up, 8 in 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:52 vm00 bash[20770]: cluster 2026-03-09T17:59:51.110392+0000 mgr.y (mgr.14505) 1254 : cluster [DBG] pgmap v1719: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:52 vm00 bash[20770]: cluster 2026-03-09T17:59:51.110392+0000 mgr.y (mgr.14505) 1254 : cluster [DBG] pgmap v1719: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:52 vm00 bash[20770]: cluster 2026-03-09T17:59:51.201617+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e795: 8 total, 8 up, 8 in 2026-03-09T17:59:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:52 vm00 bash[20770]: cluster 2026-03-09T17:59:51.201617+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e795: 8 total, 8 up, 8 in 2026-03-09T17:59:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:52 vm02 bash[23351]: cluster 2026-03-09T17:59:51.110392+0000 mgr.y (mgr.14505) 1254 : cluster [DBG] pgmap v1719: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:52 vm02 bash[23351]: cluster 2026-03-09T17:59:51.110392+0000 mgr.y (mgr.14505) 1254 : cluster [DBG] pgmap v1719: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T17:59:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:52 vm02 bash[23351]: cluster 2026-03-09T17:59:51.201617+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e795: 8 total, 8 up, 8 in 2026-03-09T17:59:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:52 vm02 bash[23351]: cluster 2026-03-09T17:59:51.201617+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e795: 8 total, 8 up, 8 in 2026-03-09T17:59:53.386 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:53 vm02 bash[23351]: cluster 2026-03-09T17:59:52.210330+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e796: 8 total, 8 up, 8 in 2026-03-09T17:59:53.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:53 vm02 bash[23351]: cluster 2026-03-09T17:59:52.210330+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e796: 8 total, 8 up, 8 in 2026-03-09T17:59:53.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 17:59:53 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T17:59:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:53 vm00 bash[28333]: cluster 2026-03-09T17:59:52.210330+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e796: 8 total, 8 up, 8 in 2026-03-09T17:59:53.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:53 vm00 bash[28333]: cluster 2026-03-09T17:59:52.210330+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e796: 8 total, 8 up, 8 in 2026-03-09T17:59:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:53 vm00 bash[20770]: cluster 2026-03-09T17:59:52.210330+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e796: 8 total, 8 up, 8 in 2026-03-09T17:59:53.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:53 vm00 bash[20770]: cluster 2026-03-09T17:59:52.210330+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e796: 8 total, 8 up, 8 in 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:54 vm00 bash[28333]: audit 2026-03-09T17:59:53.070123+0000 mgr.y (mgr.14505) 1255 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:54 vm00 bash[28333]: audit 2026-03-09T17:59:53.070123+0000 mgr.y (mgr.14505) 1255 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:54 vm00 bash[28333]: cluster 2026-03-09T17:59:53.111099+0000 mgr.y (mgr.14505) 1256 : cluster [DBG] pgmap v1722: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:54 vm00 bash[28333]: cluster 2026-03-09T17:59:53.111099+0000 mgr.y (mgr.14505) 1256 : cluster [DBG] pgmap v1722: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:54 vm00 bash[20770]: audit 2026-03-09T17:59:53.070123+0000 mgr.y (mgr.14505) 1255 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:54 vm00 bash[20770]: audit 2026-03-09T17:59:53.070123+0000 mgr.y (mgr.14505) 1255 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:54 vm00 bash[20770]: cluster 2026-03-09T17:59:53.111099+0000 mgr.y (mgr.14505) 1256 : cluster [DBG] pgmap v1722: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:59:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:54 vm00 bash[20770]: cluster 2026-03-09T17:59:53.111099+0000 mgr.y 
(mgr.14505) 1256 : cluster [DBG] pgmap v1722: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:59:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:54 vm02 bash[23351]: audit 2026-03-09T17:59:53.070123+0000 mgr.y (mgr.14505) 1255 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:54 vm02 bash[23351]: audit 2026-03-09T17:59:53.070123+0000 mgr.y (mgr.14505) 1255 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T17:59:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:54 vm02 bash[23351]: cluster 2026-03-09T17:59:53.111099+0000 mgr.y (mgr.14505) 1256 : cluster [DBG] pgmap v1722: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:59:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:54 vm02 bash[23351]: cluster 2026-03-09T17:59:53.111099+0000 mgr.y (mgr.14505) 1256 : cluster [DBG] pgmap v1722: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T17:59:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:56 vm00 bash[28333]: cluster 2026-03-09T17:59:55.111540+0000 mgr.y (mgr.14505) 1257 : cluster [DBG] pgmap v1723: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 861 B/s rd, 172 B/s wr, 1 op/s 2026-03-09T17:59:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:56 vm00 bash[28333]: cluster 2026-03-09T17:59:55.111540+0000 mgr.y (mgr.14505) 1257 : cluster [DBG] pgmap v1723: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 861 B/s rd, 172 B/s wr, 1 op/s 2026-03-09T17:59:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:56 vm00 bash[20770]: cluster 2026-03-09T17:59:55.111540+0000 mgr.y (mgr.14505) 1257 : cluster [DBG] pgmap v1723: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 861 B/s rd, 172 B/s wr, 1 op/s 2026-03-09T17:59:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:56 vm00 bash[20770]: cluster 2026-03-09T17:59:55.111540+0000 mgr.y (mgr.14505) 1257 : cluster [DBG] pgmap v1723: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 861 B/s rd, 172 B/s wr, 1 op/s 2026-03-09T17:59:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 17:59:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:17:59:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T17:59:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:56 vm02 bash[23351]: cluster 2026-03-09T17:59:55.111540+0000 mgr.y (mgr.14505) 1257 : cluster [DBG] pgmap v1723: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 861 B/s rd, 172 B/s wr, 1 op/s 2026-03-09T17:59:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:56 vm02 bash[23351]: cluster 2026-03-09T17:59:55.111540+0000 mgr.y (mgr.14505) 1257 : cluster [DBG] pgmap v1723: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 861 B/s rd, 172 B/s wr, 1 op/s 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:57 vm00 bash[28333]: cluster 2026-03-09T17:59:56.959201+0000 mon.a (mon.0) 3630 : cluster [WRN] Health check update: 2 pool(s) do not have 
an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:57 vm00 bash[28333]: cluster 2026-03-09T17:59:56.959201+0000 mon.a (mon.0) 3630 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:57 vm00 bash[28333]: cluster 2026-03-09T17:59:56.975748+0000 mon.a (mon.0) 3631 : cluster [DBG] osdmap e797: 8 total, 8 up, 8 in 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:57 vm00 bash[28333]: cluster 2026-03-09T17:59:56.975748+0000 mon.a (mon.0) 3631 : cluster [DBG] osdmap e797: 8 total, 8 up, 8 in 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:57 vm00 bash[20770]: cluster 2026-03-09T17:59:56.959201+0000 mon.a (mon.0) 3630 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:57 vm00 bash[20770]: cluster 2026-03-09T17:59:56.959201+0000 mon.a (mon.0) 3630 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:57 vm00 bash[20770]: cluster 2026-03-09T17:59:56.975748+0000 mon.a (mon.0) 3631 : cluster [DBG] osdmap e797: 8 total, 8 up, 8 in 2026-03-09T17:59:57.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:57 vm00 bash[20770]: cluster 2026-03-09T17:59:56.975748+0000 mon.a (mon.0) 3631 : cluster [DBG] osdmap e797: 8 total, 8 up, 8 in 2026-03-09T17:59:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:57 vm02 bash[23351]: cluster 2026-03-09T17:59:56.959201+0000 mon.a (mon.0) 3630 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:57 vm02 bash[23351]: cluster 2026-03-09T17:59:56.959201+0000 mon.a (mon.0) 3630 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T17:59:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:57 vm02 bash[23351]: cluster 2026-03-09T17:59:56.975748+0000 mon.a (mon.0) 3631 : cluster [DBG] osdmap e797: 8 total, 8 up, 8 in 2026-03-09T17:59:57.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:57 vm02 bash[23351]: cluster 2026-03-09T17:59:56.975748+0000 mon.a (mon.0) 3631 : cluster [DBG] osdmap e797: 8 total, 8 up, 8 in 2026-03-09T17:59:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:58 vm00 bash[28333]: cluster 2026-03-09T17:59:57.111987+0000 mgr.y (mgr.14505) 1258 : cluster [DBG] pgmap v1725: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T17:59:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:58 vm00 bash[28333]: cluster 2026-03-09T17:59:57.111987+0000 mgr.y (mgr.14505) 1258 : cluster [DBG] pgmap v1725: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T17:59:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:58 vm00 bash[20770]: cluster 2026-03-09T17:59:57.111987+0000 mgr.y (mgr.14505) 1258 : cluster [DBG] pgmap v1725: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T17:59:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
17:59:58 vm00 bash[20770]: cluster 2026-03-09T17:59:57.111987+0000 mgr.y (mgr.14505) 1258 : cluster [DBG] pgmap v1725: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T17:59:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:58 vm02 bash[23351]: cluster 2026-03-09T17:59:57.111987+0000 mgr.y (mgr.14505) 1258 : cluster [DBG] pgmap v1725: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T17:59:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:58 vm02 bash[23351]: cluster 2026-03-09T17:59:57.111987+0000 mgr.y (mgr.14505) 1258 : cluster [DBG] pgmap v1725: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T17:59:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:59 vm00 bash[28333]: audit 2026-03-09T17:59:58.934408+0000 mon.c (mon.2) 938 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:59.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 17:59:59 vm00 bash[28333]: audit 2026-03-09T17:59:58.934408+0000 mon.c (mon.2) 938 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:59 vm00 bash[20770]: audit 2026-03-09T17:59:58.934408+0000 mon.c (mon.2) 938 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:59.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 17:59:59 vm00 bash[20770]: audit 2026-03-09T17:59:58.934408+0000 mon.c (mon.2) 938 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:59 vm02 bash[23351]: audit 2026-03-09T17:59:58.934408+0000 mon.c (mon.2) 938 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T17:59:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 17:59:59 vm02 bash[23351]: audit 2026-03-09T17:59:58.934408+0000 mon.c (mon.2) 938 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T17:59:59.112590+0000 mgr.y (mgr.14505) 1259 : cluster [DBG] pgmap v1726: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 129 B/s wr, 1 op/s 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T17:59:59.112590+0000 mgr.y (mgr.14505) 1259 : cluster [DBG] pgmap v1726: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 129 B/s wr, 1 op/s 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000110+0000 mon.a (mon.0) 3632 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 
2026-03-09T18:00:00.000110+0000 mon.a (mon.0) 3632 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000130+0000 mon.a (mon.0) 3633 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000130+0000 mon.a (mon.0) 3633 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000139+0000 mon.a (mon.0) 3634 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000139+0000 mon.a (mon.0) 3634 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000149+0000 mon.a (mon.0) 3635 : cluster [WRN] application not enabled on pool 'ReusePurgedSnapvm00-60924-11' 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000149+0000 mon.a (mon.0) 3635 : cluster [WRN] application not enabled on pool 'ReusePurgedSnapvm00-60924-11' 2026-03-09T18:00:00.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000174+0000 mon.a (mon.0) 3636 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:00 vm00 bash[28333]: cluster 2026-03-09T18:00:00.000174+0000 mon.a (mon.0) 3636 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T17:59:59.112590+0000 mgr.y (mgr.14505) 1259 : cluster [DBG] pgmap v1726: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 129 B/s wr, 1 op/s 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T17:59:59.112590+0000 mgr.y (mgr.14505) 1259 : cluster [DBG] pgmap v1726: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 129 B/s wr, 1 op/s 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000110+0000 mon.a (mon.0) 3632 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000110+0000 mon.a (mon.0) 3632 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000130+0000 mon.a (mon.0) 3633 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000130+0000 mon.a (mon.0) 3633 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000139+0000 mon.a (mon.0) 3634 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000139+0000 mon.a (mon.0) 3634 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000149+0000 mon.a (mon.0) 3635 : cluster [WRN] application not enabled on pool 'ReusePurgedSnapvm00-60924-11' 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000149+0000 mon.a (mon.0) 3635 : cluster [WRN] application not enabled on pool 'ReusePurgedSnapvm00-60924-11' 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000174+0000 mon.a (mon.0) 3636 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T18:00:00.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:00 vm00 bash[20770]: cluster 2026-03-09T18:00:00.000174+0000 mon.a (mon.0) 3636 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T17:59:59.112590+0000 mgr.y (mgr.14505) 1259 : cluster [DBG] pgmap v1726: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 129 B/s wr, 1 op/s 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T17:59:59.112590+0000 mgr.y (mgr.14505) 1259 : cluster [DBG] pgmap v1726: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 129 B/s wr, 1 op/s 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000110+0000 mon.a (mon.0) 3632 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000110+0000 mon.a (mon.0) 3632 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000130+0000 mon.a (mon.0) 3633 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000130+0000 mon.a (mon.0) 3633 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000139+0000 mon.a (mon.0) 3634 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000139+0000 mon.a (mon.0) 3634 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000149+0000 mon.a (mon.0) 3635 : cluster [WRN] application not enabled on pool 'ReusePurgedSnapvm00-60924-11' 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000149+0000 mon.a (mon.0) 3635 : cluster [WRN] application not enabled on pool 'ReusePurgedSnapvm00-60924-11' 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000174+0000 mon.a (mon.0) 3636 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T18:00:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:00 vm02 bash[23351]: cluster 2026-03-09T18:00:00.000174+0000 mon.a (mon.0) 3636 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
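The remediation the monitor is suggesting in the warnings above is the standard pool-application tag. A minimal, hand-run sketch of clearing this particular HEALTH_WARN is below; the two pool names are the ones flagged in the log, while the application name "rados" is an arbitrary illustrative choice (per the message, 'cephfs', 'rbd', 'rgw' or any freeform name is accepted, and recent releases ask for --yes-i-really-mean-it when the name is freeform). In this run the flagged pools are short-lived test pools, so the warning would normally clear by itself once the API tests remove them.

    # Show the flagged pools, tag them with an application, and re-check health.
    ceph health detail
    ceph osd pool application enable ceph_test_rados_api_asio rados --yes-i-really-mean-it
    ceph osd pool application enable ReusePurgedSnapvm00-60924-11 rados --yes-i-really-mean-it
    ceph health detail    # POOL_APP_NOT_ENABLED should no longer list these pools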
2026-03-09T18:00:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:02 vm00 bash[28333]: cluster 2026-03-09T18:00:01.112979+0000 mgr.y (mgr.14505) 1260 : cluster [DBG] pgmap v1727: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 114 B/s wr, 1 op/s 2026-03-09T18:00:02.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:02 vm00 bash[28333]: cluster 2026-03-09T18:00:01.112979+0000 mgr.y (mgr.14505) 1260 : cluster [DBG] pgmap v1727: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 114 B/s wr, 1 op/s 2026-03-09T18:00:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:02 vm00 bash[20770]: cluster 2026-03-09T18:00:01.112979+0000 mgr.y (mgr.14505) 1260 : cluster [DBG] pgmap v1727: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 114 B/s wr, 1 op/s 2026-03-09T18:00:02.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:02 vm00 bash[20770]: cluster 2026-03-09T18:00:01.112979+0000 mgr.y (mgr.14505) 1260 : cluster [DBG] pgmap v1727: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 114 B/s wr, 1 op/s 2026-03-09T18:00:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:02 vm02 bash[23351]: cluster 2026-03-09T18:00:01.112979+0000 mgr.y (mgr.14505) 1260 : cluster [DBG] pgmap v1727: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 114 B/s wr, 1 op/s 2026-03-09T18:00:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:02 vm02 bash[23351]: cluster 2026-03-09T18:00:01.112979+0000 mgr.y (mgr.14505) 1260 : cluster [DBG] pgmap v1727: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 114 B/s wr, 1 op/s 2026-03-09T18:00:03.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:00:03 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:04 vm00 bash[28333]: audit 2026-03-09T18:00:03.075148+0000 mgr.y (mgr.14505) 1261 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:04 vm00 bash[28333]: audit 2026-03-09T18:00:03.075148+0000 mgr.y (mgr.14505) 1261 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:04 vm00 bash[28333]: cluster 2026-03-09T18:00:03.113548+0000 mgr.y (mgr.14505) 1262 : cluster [DBG] pgmap v1728: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:04 vm00 bash[28333]: cluster 2026-03-09T18:00:03.113548+0000 mgr.y (mgr.14505) 1262 : cluster [DBG] pgmap v1728: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:04 vm00 bash[20770]: audit 2026-03-09T18:00:03.075148+0000 mgr.y (mgr.14505) 1261 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:04 vm00 bash[20770]: audit 2026-03-09T18:00:03.075148+0000 mgr.y (mgr.14505) 
1261 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:04 vm00 bash[20770]: cluster 2026-03-09T18:00:03.113548+0000 mgr.y (mgr.14505) 1262 : cluster [DBG] pgmap v1728: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:04 vm00 bash[20770]: cluster 2026-03-09T18:00:03.113548+0000 mgr.y (mgr.14505) 1262 : cluster [DBG] pgmap v1728: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:04.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:04 vm02 bash[23351]: audit 2026-03-09T18:00:03.075148+0000 mgr.y (mgr.14505) 1261 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:04 vm02 bash[23351]: audit 2026-03-09T18:00:03.075148+0000 mgr.y (mgr.14505) 1261 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:04 vm02 bash[23351]: cluster 2026-03-09T18:00:03.113548+0000 mgr.y (mgr.14505) 1262 : cluster [DBG] pgmap v1728: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:04 vm02 bash[23351]: cluster 2026-03-09T18:00:03.113548+0000 mgr.y (mgr.14505) 1262 : cluster [DBG] pgmap v1728: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:06 vm02 bash[23351]: cluster 2026-03-09T18:00:05.113865+0000 mgr.y (mgr.14505) 1263 : cluster [DBG] pgmap v1729: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:06 vm02 bash[23351]: cluster 2026-03-09T18:00:05.113865+0000 mgr.y (mgr.14505) 1263 : cluster [DBG] pgmap v1729: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:06 vm00 bash[28333]: cluster 2026-03-09T18:00:05.113865+0000 mgr.y (mgr.14505) 1263 : cluster [DBG] pgmap v1729: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:06 vm00 bash[28333]: cluster 2026-03-09T18:00:05.113865+0000 mgr.y (mgr.14505) 1263 : cluster [DBG] pgmap v1729: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:06 vm00 bash[20770]: cluster 2026-03-09T18:00:05.113865+0000 mgr.y (mgr.14505) 1263 : cluster [DBG] pgmap v1729: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:06 vm00 bash[20770]: cluster 2026-03-09T18:00:05.113865+0000 mgr.y (mgr.14505) 1263 : cluster [DBG] pgmap v1729: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:00:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:00:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:00:07.392 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: Running main() from gmock_main.cc 2026-03-09T18:00:07.392 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [==========] Running 11 tests from 2 test suites. 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] Global test environment set-up. 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapList 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapList (1803736 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapRemove 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapRemove (5052 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.Rollback 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.Rollback (4075 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapGetName 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapGetName (5244 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapCreateRemove 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapCreateRemove (7192 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots (1825299 ms total) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Snap 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Snap (5210 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Rollback 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Rollback (6107 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.SnapOverlap 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.SnapOverlap (8072 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Bug11677 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Bug11677 (6087 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.OrderSnap 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.OrderSnap (4490 ms) 2026-03-09T18:00:07.393 
INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.ReusePurgedSnap 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: Deleting snap 3 in pool ReusePurgedSnapvm00-60924-11. 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: Waiting for snaps to purge. 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.ReusePurgedSnap (19255 ms) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps (49221 ms total) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] Global test environment tear-down 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [==========] 11 tests from 2 test suites ran. (1874526 ms total) 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ PASSED ] 11 tests. 2026-03-09T18:00:07.393 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60374 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60374 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59903 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59903 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59994 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59994 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60335 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60335 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60057 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60057 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60398 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60398 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60422 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60422 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60759 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60759 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60938 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60938 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ exit 0 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ cleanup 2026-03-09T18:00:07.394 INFO:tasks.workunit.client.0.vm00.stderr:+ pkill -P 59897 2026-03-09T18:00:07.397 
INFO:tasks.workunit.client.0.vm00.stderr:+ true 2026-03-09T18:00:07.398 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T18:00:07.398 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T18:00:07.408 INFO:tasks.workunit:Running workunits matching rados/test_pool_quota.sh on client.0... 2026-03-09T18:00:07.423 INFO:tasks.workunit:Running workunit rados/test_pool_quota.sh... 2026-03-09T18:00:07.423 DEBUG:teuthology.orchestra.run.vm00:workunit test rados/test_pool_quota.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh 2026-03-09T18:00:07.458 INFO:tasks.workunit.client.0.vm00.stderr:+ uuidgen 2026-03-09T18:00:07.459 INFO:tasks.workunit.client.0.vm00.stderr:+ p=bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:00:07.459 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool create bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 12 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 -- 192.168.123.100:0/2844209534 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 msgr2=0x7f5104111990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 --2- 192.168.123.100:0/2844209534 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f5104111990 secure :-1 s=READY pgs=3110 cs=0 l=1 rev1=1 crypto rx=0x7f50f400b0d0 tx=0x7f50f401ca70 comp rx=0 tx=0).stop 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 -- 192.168.123.100:0/2844209534 shutdown_connections 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 --2- 192.168.123.100:0/2844209534 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f5104111990 unknown :-1 s=CLOSED pgs=3110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 --2- 192.168.123.100:0/2844209534 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104100f20 0x7f5104101360 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 --2- 192.168.123.100:0/2844209534 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104103530 0x7f5104103910 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 -- 192.168.123.100:0/2844209534 >> 192.168.123.100:0/2844209534 conn(0x7f5104078100 msgr2=0x7f5104078510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:07.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 -- 
192.168.123.100:0/2844209534 shutdown_connections 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 -- 192.168.123.100:0/2844209534 wait complete. 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 Processor -- start 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.529+0000 7f5103577640 1 -- start start 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104100f20 0x7f510419f090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f510419f5d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104103530 0x7f51041a3960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f5104116c20 con 0x7f51041018a0 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f5104116aa0 con 0x7f5104103530 2026-03-09T18:00:07.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f5104116da0 con 0x7f5104100f20 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f510419f5d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f510419f5d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:49442/0 (socket says 192.168.123.100:49442) 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 -- 192.168.123.100:0/1658291374 learned_addr learned my addr 192.168.123.100:0/1658291374 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5102d76640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104103530 0x7f51041a3960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5102575640 1 --2- 192.168.123.100:0/1658291374 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104100f20 0x7f510419f090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 -- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104100f20 msgr2=0x7f510419f090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104100f20 0x7f510419f090 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 -- 192.168.123.100:0/1658291374 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104103530 msgr2=0x7f51041a3960 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104103530 0x7f51041a3960 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 -- 192.168.123.100:0/1658291374 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f51041a4040 con 0x7f51041018a0 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5102575640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104100f20 0x7f510419f090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5102d76640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104103530 0x7f51041a3960 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
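For orientation, the quota exercise that rados/test_pool_quota.sh is starting here reduces to ordinary CLI calls; a rough hand-run equivalent of the steps visible in this log (create a pool with 12 PGs, cap it at 10 objects) is sketched below. The pool name, loop count and input file are arbitrary illustrative values rather than anything taken from the script, and the exact behaviour once the quota is exceeded depends on the release and on client full-handling flags.

    # Sketch of a per-pool object quota, assuming an admin keyring on this node.
    POOL=quota-demo                                   # arbitrary; the workunit uses a uuidgen name instead
    ceph osd pool create "$POOL" 12                   # 12 PGs, matching the create above
    ceph osd pool set-quota "$POOL" max_objects 10    # same limit the workunit sets next
    ceph osd pool get-quota "$POOL"                   # confirm "max objects: 10"
    # Quota accounting is asynchronous, so a few writes past the limit may land
    # before the pool is flagged full; after that, further writes stall or fail
    # depending on client flags.
    for i in $(seq 1 12); do rados -p "$POOL" put "obj-$i" /etc/hostname; done
    ceph osd pool set-quota "$POOL" max_objects 0     # 0 removes the quota again
    ceph osd pool rm "$POOL" "$POOL" --yes-i-really-really-mean-it   # needs mon_allow_pool_delete=true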
2026-03-09T18:00:07.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5101d74640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f510419f5d0 secure :-1 s=READY pgs=3111 cs=0 l=1 rev1=1 crypto rx=0x7f50ec0027c0 tx=0x7f50ec002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:07.538 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f50ec00ed40 con 0x7f51041018a0 2026-03-09T18:00:07.538 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f50ec0108a0 con 0x7f51041018a0 2026-03-09T18:00:07.538 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f50ec00f6c0 con 0x7f51041018a0 2026-03-09T18:00:07.538 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f51041a4330 con 0x7f51041018a0 2026-03-09T18:00:07.538 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f51041a4790 con 0x7f51041018a0 2026-03-09T18:00:07.542 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f510406aa30 con 0x7f51041018a0 2026-03-09T18:00:07.542 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f50ec010430 con 0x7f51041018a0 2026-03-09T18:00:07.543 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f50eb7fe640 1 --2- 192.168.123.100:0/1658291374 >> v2:192.168.123.100:6800/2673235927 conn(0x7f50d40777a0 0x7f50d4079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:07.543 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.533+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(798..798 src has 1..798) ==== 7348+0+0 (secure 0 0 0) 0x7f50ec09a490 con 0x7f51041018a0 2026-03-09T18:00:07.543 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.537+0000 7f5102575640 1 --2- 192.168.123.100:0/1658291374 >> v2:192.168.123.100:6800/2673235927 conn(0x7f50d40777a0 0x7f50d4079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:07.543 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.537+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f51041a4790 con 0x7f51041018a0 2026-03-09T18:00:07.543 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.537+0000 7f5102575640 1 --2- 192.168.123.100:0/1658291374 >> v2:192.168.123.100:6800/2673235927 conn(0x7f50d40777a0 0x7f50d4079c60 secure :-1 s=READY pgs=4252 cs=0 l=1 rev1=1 crypto rx=0x7f50f8008b10 tx=0x7f50f8005e30 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:07.638 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:07.633+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12} v 0) -- 0x7f5104101360 con 0x7f51041018a0 2026-03-09T18:00:08.391 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.385+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]=0 pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' created v799) ==== 176+0+0 (secure 0 0 0) 0x7f50ec014030 con 0x7f51041018a0 2026-03-09T18:00:08.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.445+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12} v 0) -- 0x7f510410cac0 con 0x7f51041018a0 2026-03-09T18:00:08.451 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.445+0000 7f50eb7fe640 1 -- 192.168.123.100:0/1658291374 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]=0 pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' already exists v799) ==== 183+0+0 (secure 0 0 0) 0x7f50ec066e60 con 0x7f51041018a0 2026-03-09T18:00:08.451 INFO:tasks.workunit.client.0.vm00.stderr:pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' already exists 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 >> v2:192.168.123.100:6800/2673235927 conn(0x7f50d40777a0 msgr2=0x7f50d4079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 --2- 192.168.123.100:0/1658291374 >> v2:192.168.123.100:6800/2673235927 conn(0x7f50d40777a0 0x7f50d4079c60 secure :-1 s=READY pgs=4252 cs=0 l=1 rev1=1 crypto rx=0x7f50f8008b10 tx=0x7f50f8005e30 comp rx=0 tx=0).stop 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 msgr2=0x7f510419f5d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f510419f5d0 secure :-1 s=READY pgs=3111 cs=0 l=1 rev1=1 crypto rx=0x7f50ec0027c0 tx=0x7f50ec002c80 comp rx=0 tx=0).stop 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 shutdown_connections 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 --2- 
192.168.123.100:0/1658291374 >> v2:192.168.123.100:6800/2673235927 conn(0x7f50d40777a0 0x7f50d4079c60 unknown :-1 s=CLOSED pgs=4252 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.453 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5104103530 0x7f51041a3960 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.454 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f51041018a0 0x7f510419f5d0 unknown :-1 s=CLOSED pgs=3111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.454 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 --2- 192.168.123.100:0/1658291374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f5104100f20 0x7f510419f090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.454 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 >> 192.168.123.100:0/1658291374 conn(0x7f5104078100 msgr2=0x7f51040fef70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:08.454 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 shutdown_connections 2026-03-09T18:00:08.454 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.449+0000 7f5103577640 1 -- 192.168.123.100:0/1658291374 wait complete. 2026-03-09T18:00:08.467 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_objects 10 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/2359248212 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 msgr2=0x7fe228109850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/2359248212 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe228109850 secure :-1 s=READY pgs=3112 cs=0 l=1 rev1=1 crypto rx=0x7fe21c009a80 tx=0x7fe21c01c960 comp rx=0 tx=0).stop 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/2359248212 shutdown_connections 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/2359248212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe228111b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/2359248212 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe228109850 unknown :-1 s=CLOSED pgs=3112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/2359248212 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe228104e50 0x7fe228105230 unknown :-1 s=CLOSED pgs=0 cs=0 
l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/2359248212 >> 192.168.123.100:0/2359248212 conn(0x7fe2281008f0 msgr2=0x7fe228102d10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:08.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/2359248212 shutdown_connections 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/2359248212 wait complete. 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 Processor -- start 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- start start 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe228104e50 0x7fe22819f300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe22819f840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe2281a3bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe228116d10 con 0x7fe228105800 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fe228116b90 con 0x7fe228109f80 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fe228116e90 con 0x7fe228104e50 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe2281a3bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe2281a3bd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:41660/0 (socket says 192.168.123.100:41660) 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 -- 192.168.123.100:0/213674825 learned_addr learned my addr 192.168.123.100:0/213674825 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2267fc640 1 --2- 
192.168.123.100:0/213674825 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe22819f840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 -- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe228104e50 msgr2=0x7fe22819f300 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe228104e50 0x7fe22819f300 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.533 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 -- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 msgr2=0x7fe22819f840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe22819f840 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 -- 192.168.123.100:0/213674825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe2281a4350 con 0x7fe228109f80 2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2267fc640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe22819f840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
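A note on the surrounding noise: the '1 --' and '1 --2-' prefixes in these stderr lines mark client messenger (ms) debug output at level 1, which is why every short-lived ceph invocation in the workunit is bracketed by connect/mark_down/mon_getmap/mon_command traces. The same verbosity can be reproduced by hand for a single call; the get-quota command and pool name below are just an illustrative choice, reusing the pool this workunit created.

    # One-off: trace the client messenger for a single command.
    ceph --debug-ms 1 osd pool get-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c
    # Or for every ceph/rados call in the current shell (CEPH_ARGS is the same
    # mechanism the workunit uses above to pass --cluster ceph):
    export CEPH_ARGS="--cluster ceph --debug-ms 1"
    ceph osd pool get-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c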
2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe2277fe640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe2281a3bd0 secure :-1 s=READY pgs=2864 cs=0 l=1 rev1=1 crypto rx=0x7fe21800ef30 tx=0x7fe21800c550 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe218019070 con 0x7fe228109f80 2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe2281a45e0 con 0x7fe228109f80 2026-03-09T18:00:08.534 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe2281abe80 con 0x7fe228109f80 2026-03-09T18:00:08.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe2180092d0 con 0x7fe228109f80 2026-03-09T18:00:08.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe1ec005190 con 0x7fe228109f80 2026-03-09T18:00:08.535 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.529+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe218004800 con 0x7fe228109f80 2026-03-09T18:00:08.536 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.533+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fe218005ce0 con 0x7fe228109f80 2026-03-09T18:00:08.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.533+0000 7fe22c8c7640 1 --2- 192.168.123.100:0/213674825 >> v2:192.168.123.100:6800/2673235927 conn(0x7fe1fc077640 0x7fe1fc079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:08.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.533+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(799..799 src has 1..799) ==== 7723+0+0 (secure 0 0 0) 0x7fe21809a1d0 con 0x7fe228109f80 2026-03-09T18:00:08.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.533+0000 7fe226ffd640 1 --2- 192.168.123.100:0/213674825 >> v2:192.168.123.100:6800/2673235927 conn(0x7fe1fc077640 0x7fe1fc079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:08.537 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.533+0000 7fe226ffd640 1 --2- 192.168.123.100:0/213674825 >> v2:192.168.123.100:6800/2673235927 conn(0x7fe1fc077640 0x7fe1fc079b00 secure :-1 s=READY pgs=4253 cs=0 l=1 rev1=1 crypto rx=0x7fe210006fd0 tx=0x7fe210006ce0 comp rx=0 tx=0).ready entity=mgr.14505 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:08.540 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.537+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe218010040 con 0x7fe228109f80 2026-03-09T18:00:08.630 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:08.625+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"} v 0) -- 0x7fe1ec005480 con 0x7fe228109f80 2026-03-09T18:00:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:08 vm02 bash[23351]: cluster 2026-03-09T18:00:07.114206+0000 mgr.y (mgr.14505) 1264 : cluster [DBG] pgmap v1730: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 0 op/s 2026-03-09T18:00:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:08 vm02 bash[23351]: cluster 2026-03-09T18:00:07.114206+0000 mgr.y (mgr.14505) 1264 : cluster [DBG] pgmap v1730: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 0 op/s 2026-03-09T18:00:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:08 vm02 bash[23351]: cluster 2026-03-09T18:00:07.387612+0000 mon.a (mon.0) 3637 : cluster [DBG] osdmap e798: 8 total, 8 up, 8 in 2026-03-09T18:00:08.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:08 vm02 bash[23351]: cluster 2026-03-09T18:00:07.387612+0000 mon.a (mon.0) 3637 : cluster [DBG] osdmap e798: 8 total, 8 up, 8 in 2026-03-09T18:00:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:08 vm02 bash[23351]: audit 2026-03-09T18:00:07.638536+0000 mon.a (mon.0) 3638 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:08.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:08 vm02 bash[23351]: audit 2026-03-09T18:00:07.638536+0000 mon.a (mon.0) 3638 : audit [INF] from='client.? 
192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:08 vm00 bash[28333]: cluster 2026-03-09T18:00:07.114206+0000 mgr.y (mgr.14505) 1264 : cluster [DBG] pgmap v1730: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 0 op/s 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:08 vm00 bash[28333]: cluster 2026-03-09T18:00:07.114206+0000 mgr.y (mgr.14505) 1264 : cluster [DBG] pgmap v1730: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 0 op/s 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:08 vm00 bash[28333]: cluster 2026-03-09T18:00:07.387612+0000 mon.a (mon.0) 3637 : cluster [DBG] osdmap e798: 8 total, 8 up, 8 in 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:08 vm00 bash[28333]: cluster 2026-03-09T18:00:07.387612+0000 mon.a (mon.0) 3637 : cluster [DBG] osdmap e798: 8 total, 8 up, 8 in 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:08 vm00 bash[28333]: audit 2026-03-09T18:00:07.638536+0000 mon.a (mon.0) 3638 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:08 vm00 bash[28333]: audit 2026-03-09T18:00:07.638536+0000 mon.a (mon.0) 3638 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:08 vm00 bash[20770]: cluster 2026-03-09T18:00:07.114206+0000 mgr.y (mgr.14505) 1264 : cluster [DBG] pgmap v1730: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 0 op/s 2026-03-09T18:00:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:08 vm00 bash[20770]: cluster 2026-03-09T18:00:07.114206+0000 mgr.y (mgr.14505) 1264 : cluster [DBG] pgmap v1730: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 0 op/s 2026-03-09T18:00:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:08 vm00 bash[20770]: cluster 2026-03-09T18:00:07.387612+0000 mon.a (mon.0) 3637 : cluster [DBG] osdmap e798: 8 total, 8 up, 8 in 2026-03-09T18:00:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:08 vm00 bash[20770]: cluster 2026-03-09T18:00:07.387612+0000 mon.a (mon.0) 3637 : cluster [DBG] osdmap e798: 8 total, 8 up, 8 in 2026-03-09T18:00:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:08 vm00 bash[20770]: audit 2026-03-09T18:00:07.638536+0000 mon.a (mon.0) 3638 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:08.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:08 vm00 bash[20770]: audit 2026-03-09T18:00:07.638536+0000 mon.a (mon.0) 3638 : audit [INF] from='client.? 
192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.407 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:09.401+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v800) ==== 223+0+0 (secure 0 0 0) 0x7fe218066a30 con 0x7fe228109f80 2026-03-09T18:00:09.475 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:09.469+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"} v 0) -- 0x7fe1ec005c30 con 0x7fe228109f80 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.391122+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]': finished 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.391122+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]': finished 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: cluster 2026-03-09T18:00:08.407670+0000 mon.a (mon.0) 3640 : cluster [DBG] osdmap e799: 8 total, 8 up, 8 in 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: cluster 2026-03-09T18:00:08.407670+0000 mon.a (mon.0) 3640 : cluster [DBG] osdmap e799: 8 total, 8 up, 8 in 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.451134+0000 mon.a (mon.0) 3641 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.451134+0000 mon.a (mon.0) 3641 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.630417+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.630417+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 
192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.631353+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:09 vm00 bash[28333]: audit 2026-03-09T18:00:08.631353+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.391122+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]': finished 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.391122+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]': finished 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: cluster 2026-03-09T18:00:08.407670+0000 mon.a (mon.0) 3640 : cluster [DBG] osdmap e799: 8 total, 8 up, 8 in 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: cluster 2026-03-09T18:00:08.407670+0000 mon.a (mon.0) 3640 : cluster [DBG] osdmap e799: 8 total, 8 up, 8 in 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.451134+0000 mon.a (mon.0) 3641 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.451134+0000 mon.a (mon.0) 3641 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.630417+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.630417+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 
192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.631353+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:09 vm00 bash[20770]: audit 2026-03-09T18:00:08.631353+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.391122+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]': finished 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.391122+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]': finished 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: cluster 2026-03-09T18:00:08.407670+0000 mon.a (mon.0) 3640 : cluster [DBG] osdmap e799: 8 total, 8 up, 8 in 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: cluster 2026-03-09T18:00:08.407670+0000 mon.a (mon.0) 3640 : cluster [DBG] osdmap e799: 8 total, 8 up, 8 in 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.451134+0000 mon.a (mon.0) 3641 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.451134+0000 mon.a (mon.0) 3641 : audit [INF] from='client.? 192.168.123.100:0/1658291374' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pg_num": 12}]: dispatch 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.630417+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.630417+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 
192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.631353+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:09.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:09 vm02 bash[23351]: audit 2026-03-09T18:00:08.631353+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.405 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22c8c7640 1 -- 192.168.123.100:0/213674825 <== mon.1 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v801) ==== 223+0+0 (secure 0 0 0) 0x7fe21806b8e0 con 0x7fe228109f80 2026-03-09T18:00:10.405 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 10 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:00:10.407 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 >> v2:192.168.123.100:6800/2673235927 conn(0x7fe1fc077640 msgr2=0x7fe1fc079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/213674825 >> v2:192.168.123.100:6800/2673235927 conn(0x7fe1fc077640 0x7fe1fc079b00 secure :-1 s=READY pgs=4253 cs=0 l=1 rev1=1 crypto rx=0x7fe210006fd0 tx=0x7fe210006ce0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 msgr2=0x7fe2281a3bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe2281a3bd0 secure :-1 s=READY pgs=2864 cs=0 l=1 rev1=1 crypto rx=0x7fe21800ef30 tx=0x7fe21800c550 comp rx=0 tx=0).stop 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 shutdown_connections 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/213674825 >> v2:192.168.123.100:6800/2673235927 conn(0x7fe1fc077640 0x7fe1fc079b00 unknown :-1 s=CLOSED pgs=4253 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe228109f80 0x7fe2281a3bd0 unknown :-1 s=CLOSED pgs=2864 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.408 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe228105800 0x7fe22819f840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 --2- 192.168.123.100:0/213674825 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe228104e50 0x7fe22819f300 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 >> 192.168.123.100:0/213674825 conn(0x7fe2281008f0 msgr2=0x7fe228100ed0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.401+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 shutdown_connections 2026-03-09T18:00:10.408 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.405+0000 7fe22d8c9640 1 -- 192.168.123.100:0/213674825 wait complete. 2026-03-09T18:00:10.420 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool application enable bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c rados 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- 192.168.123.100:0/3105659064 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a8103b50 msgr2=0x7f45a8103f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- 192.168.123.100:0/3105659064 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a8103b50 0x7f45a8103f30 secure :-1 s=READY pgs=3113 cs=0 l=1 rev1=1 crypto rx=0x7f459c009a30 tx=0x7f459c01c990 comp rx=0 tx=0).stop 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- 192.168.123.100:0/3105659064 shutdown_connections 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- 192.168.123.100:0/3105659064 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a810cf30 0x7f45a810f3d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- 192.168.123.100:0/3105659064 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f45a8104470 0x7f45a810c960 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- 192.168.123.100:0/3105659064 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a8103b50 0x7f45a8103f30 unknown :-1 s=CLOSED pgs=3113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- 192.168.123.100:0/3105659064 >> 192.168.123.100:0/3105659064 conn(0x7f45a80fd530 msgr2=0x7f45a80ff950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- 192.168.123.100:0/3105659064 shutdown_connections 2026-03-09T18:00:10.494 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- 192.168.123.100:0/3105659064 wait complete. 2026-03-09T18:00:10.494 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 Processor -- start 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- start start 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 0x7f45a819cee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f45a8104470 0x7f45a819d420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 0x7f45a81a17b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f45a8114990 con 0x7f45a810cf30 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f45a8114810 con 0x7f45a8104470 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45af56f640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f45a8114b10 con 0x7f45a8103b50 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45adae5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 0x7f45a81a17b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 0x7f45a819cee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 0x7f45a819cee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:54250/0 (socket says 192.168.123.100:54250) 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45adae5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 0x7f45a81a17b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:39618/0 (socket says 192.168.123.100:39618) 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 
-- 192.168.123.100:0/794540549 learned_addr learned my addr 192.168.123.100:0/794540549 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45acae3640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f45a8104470 0x7f45a819d420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 -- 192.168.123.100:0/794540549 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f45a8104470 msgr2=0x7f45a819d420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f45a8104470 0x7f45a819d420 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 -- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 msgr2=0x7f45a81a17b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:10.495 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 0x7f45a81a17b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:10.496 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45ad2e4640 1 -- 192.168.123.100:0/794540549 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f45a81a1e90 con 0x7f45a8103b50 2026-03-09T18:00:10.496 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.489+0000 7f45adae5640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 0x7f45a81a17b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
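The workunit trace in this stretch of the log corresponds to the pool-quota setup in rados/test_pool_quota.sh: the client creates a throwaway pool (the UUID pool name suggests the script generates it, e.g. with uuidgen), caps it at 10 objects, tags it with the 'rados' application, then writes ten objects and sleeps. A minimal sketch of those steps, reconstructed only from the commands visible in this log (pool name and uuidgen are placeholders, not taken from the script source):

    pool=$(uuidgen)                                   # log shows a UUID pool name, e.g. bf8b5401-...
    ceph osd pool create "$pool" 12                   # "osd pool create" with pg_num 12, as dispatched above
    ceph osd pool set-quota "$pool" max_objects 10    # cap the pool at 10 objects
    ceph osd pool application enable "$pool" rados    # avoid the POOL_APP_NOT_ENABLED warning
    for i in $(seq 1 10); do
        rados -p "$pool" put "obj$i" /etc/passwd      # fill the pool up to the quota
    done
    sleep 30                                          # give the mons time to register the quota state

The 30-second sleep after the final put matches the "+ sleep 30" traced later in this log; presumably it lets the quota propagate before the script continues.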
2026-03-09T18:00:10.496 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45ad2e4640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 0x7f45a819cee0 secure :-1 s=READY pgs=3184 cs=0 l=1 rev1=1 crypto rx=0x7f459c01ce70 tx=0x7f459c0ae630 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:10.496 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f459c005e60 con 0x7f45a8103b50 2026-03-09T18:00:10.496 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f45a81a2120 con 0x7f45a8103b50 2026-03-09T18:00:10.496 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f45a8101660 con 0x7f45a8103b50 2026-03-09T18:00:10.497 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f459c004540 con 0x7f45a8103b50 2026-03-09T18:00:10.497 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f459c0aec30 con 0x7f45a8103b50 2026-03-09T18:00:10.497 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f459c002b80 con 0x7f45a8103b50 2026-03-09T18:00:10.498 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45967fc640 1 --2- 192.168.123.100:0/794540549 >> v2:192.168.123.100:6800/2673235927 conn(0x7f45780777a0 0x7f4578079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:10.498 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45acae3640 1 --2- 192.168.123.100:0/794540549 >> v2:192.168.123.100:6800/2673235927 conn(0x7f45780777a0 0x7f4578079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:10.498 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(801..801 src has 1..801) ==== 7723+0+0 (secure 0 0 0) 0x7f459c134130 con 0x7f45a8103b50 2026-03-09T18:00:10.498 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45acae3640 1 --2- 192.168.123.100:0/794540549 >> v2:192.168.123.100:6800/2673235927 conn(0x7f45780777a0 0x7f4578079c60 secure :-1 s=READY pgs=4254 cs=0 l=1 rev1=1 crypto rx=0x7f45a80ff3e0 tx=0x7f459800a400 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:10.498 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.493+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7f4570005190 con 0x7f45a8103b50 2026-03-09T18:00:10.502 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.497+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f459c0bd050 con 0x7f45a8103b50 2026-03-09T18:00:10.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:10.585+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"} v 0) -- 0x7f4570005480 con 0x7f45a8103b50 2026-03-09T18:00:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: cluster 2026-03-09T18:00:09.114535+0000 mgr.y (mgr.14505) 1265 : cluster [DBG] pgmap v1733: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: cluster 2026-03-09T18:00:09.114535+0000 mgr.y (mgr.14505) 1265 : cluster [DBG] pgmap v1733: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:09.397892+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:09.397892+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: cluster 2026-03-09T18:00:09.406887+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e800: 8 total, 8 up, 8 in 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: cluster 2026-03-09T18:00:09.406887+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e800: 8 total, 8 up, 8 in 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:09.475674+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:09.475674+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:09.476366+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:09.476366+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:10.401044+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: audit 2026-03-09T18:00:10.401044+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: cluster 2026-03-09T18:00:10.403828+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e801: 8 total, 8 up, 8 in 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:10 vm00 bash[28333]: cluster 2026-03-09T18:00:10.403828+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e801: 8 total, 8 up, 8 in 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: cluster 2026-03-09T18:00:09.114535+0000 mgr.y (mgr.14505) 1265 : cluster [DBG] pgmap v1733: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: cluster 2026-03-09T18:00:09.114535+0000 mgr.y (mgr.14505) 1265 : cluster [DBG] pgmap v1733: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:09.397892+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:09.397892+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: cluster 2026-03-09T18:00:09.406887+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e800: 8 total, 8 up, 8 in 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: cluster 2026-03-09T18:00:09.406887+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e800: 8 total, 8 up, 8 in 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:09.475674+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 
192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:09.475674+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:09.476366+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:09.476366+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:10.401044+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: audit 2026-03-09T18:00:10.401044+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: cluster 2026-03-09T18:00:10.403828+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e801: 8 total, 8 up, 8 in 2026-03-09T18:00:10.790 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:10 vm00 bash[20770]: cluster 2026-03-09T18:00:10.403828+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e801: 8 total, 8 up, 8 in 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: cluster 2026-03-09T18:00:09.114535+0000 mgr.y (mgr.14505) 1265 : cluster [DBG] pgmap v1733: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: cluster 2026-03-09T18:00:09.114535+0000 mgr.y (mgr.14505) 1265 : cluster [DBG] pgmap v1733: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:09.397892+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:09.397892+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: cluster 2026-03-09T18:00:09.406887+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e800: 8 total, 8 up, 8 in 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: cluster 2026-03-09T18:00:09.406887+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e800: 8 total, 8 up, 8 in 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:09.475674+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:09.475674+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/213674825' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:09.476366+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:09.476366+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:10.401044+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: audit 2026-03-09T18:00:10.401044+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: cluster 2026-03-09T18:00:10.403828+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e801: 8 total, 8 up, 8 in 2026-03-09T18:00:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:10 vm02 bash[23351]: cluster 2026-03-09T18:00:10.403828+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e801: 8 total, 8 up, 8 in 2026-03-09T18:00:11.462 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:11.457+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]=0 enabled application 'rados' on pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' v802) ==== 213+0+0 (secure 0 0 0) 0x7f459c016610 con 0x7f45a8103b50 2026-03-09T18:00:11.515 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:11.509+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"} v 0) -- 0x7f4570004820 con 0x7f45a8103b50 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:11 vm00 bash[28333]: audit 2026-03-09T18:00:10.589807+0000 mon.c (mon.2) 939 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:11 vm00 bash[28333]: audit 2026-03-09T18:00:10.589807+0000 mon.c (mon.2) 939 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:11 vm00 bash[28333]: audit 2026-03-09T18:00:10.590167+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:11 vm00 bash[28333]: audit 2026-03-09T18:00:10.590167+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:11 vm00 bash[20770]: audit 2026-03-09T18:00:10.589807+0000 mon.c (mon.2) 939 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:11 vm00 bash[20770]: audit 2026-03-09T18:00:10.589807+0000 mon.c (mon.2) 939 : audit [INF] from='client.? 
192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:11 vm00 bash[20770]: audit 2026-03-09T18:00:10.590167+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:11 vm00 bash[20770]: audit 2026-03-09T18:00:10.590167+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:11 vm02 bash[23351]: audit 2026-03-09T18:00:10.589807+0000 mon.c (mon.2) 939 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:11 vm02 bash[23351]: audit 2026-03-09T18:00:10.589807+0000 mon.c (mon.2) 939 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:11 vm02 bash[23351]: audit 2026-03-09T18:00:10.590167+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:11.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:11 vm02 bash[23351]: audit 2026-03-09T18:00:10.590167+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.473+0000 7f45967fc640 1 -- 192.168.123.100:0/794540549 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]=0 enabled application 'rados' on pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' v803) ==== 213+0+0 (secure 0 0 0) 0x7f459c100990 con 0x7f45a8103b50 2026-03-09T18:00:12.478 INFO:tasks.workunit.client.0.vm00.stderr:enabled application 'rados' on pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' 2026-03-09T18:00:12.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 >> v2:192.168.123.100:6800/2673235927 conn(0x7f45780777a0 msgr2=0x7f4578079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:12.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 --2- 192.168.123.100:0/794540549 >> v2:192.168.123.100:6800/2673235927 conn(0x7f45780777a0 0x7f4578079c60 secure :-1 s=READY pgs=4254 cs=0 l=1 rev1=1 crypto rx=0x7f45a80ff3e0 tx=0x7f459800a400 comp rx=0 tx=0).stop 2026-03-09T18:00:12.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 msgr2=0x7f45a819cee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:12.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 0x7f45a819cee0 secure :-1 s=READY pgs=3184 cs=0 l=1 rev1=1 crypto rx=0x7f459c01ce70 tx=0x7f459c0ae630 comp rx=0 tx=0).stop 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 shutdown_connections 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 --2- 192.168.123.100:0/794540549 >> v2:192.168.123.100:6800/2673235927 conn(0x7f45780777a0 0x7f4578079c60 unknown :-1 s=CLOSED pgs=4254 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f45a810cf30 0x7f45a81a17b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f45a8104470 0x7f45a819d420 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 --2- 192.168.123.100:0/794540549 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f45a8103b50 0x7f45a819cee0 unknown :-1 s=CLOSED pgs=3184 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 >> 192.168.123.100:0/794540549 
conn(0x7f45a80fd530 msgr2=0x7f45a80fe390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 shutdown_connections 2026-03-09T18:00:12.481 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:12.477+0000 7f45af56f640 1 -- 192.168.123.100:0/794540549 wait complete. 2026-03-09T18:00:12.494 INFO:tasks.workunit.client.0.vm00.stderr:+ seq 1 10 2026-03-09T18:00:12.495 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj1 /etc/passwd 2026-03-09T18:00:12.538 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj2 /etc/passwd 2026-03-09T18:00:12.567 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj3 /etc/passwd 2026-03-09T18:00:12.602 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj4 /etc/passwd 2026-03-09T18:00:12.633 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj5 /etc/passwd 2026-03-09T18:00:12.663 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj6 /etc/passwd 2026-03-09T18:00:12.690 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj7 /etc/passwd 2026-03-09T18:00:12.722 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj8 /etc/passwd 2026-03-09T18:00:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: cluster 2026-03-09T18:00:11.114897+0000 mgr.y (mgr.14505) 1266 : cluster [DBG] pgmap v1736: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: cluster 2026-03-09T18:00:11.114897+0000 mgr.y (mgr.14505) 1266 : cluster [DBG] pgmap v1736: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: audit 2026-03-09T18:00:11.448708+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: audit 2026-03-09T18:00:11.448708+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: cluster 2026-03-09T18:00:11.457130+0000 mon.a (mon.0) 3650 : cluster [DBG] osdmap e802: 8 total, 8 up, 8 in 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: cluster 2026-03-09T18:00:11.457130+0000 mon.a (mon.0) 3650 : cluster [DBG] osdmap e802: 8 total, 8 up, 8 in 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: audit 2026-03-09T18:00:11.516121+0000 mon.c (mon.2) 940 : audit [INF] from='client.? 
192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: audit 2026-03-09T18:00:11.516121+0000 mon.c (mon.2) 940 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: audit 2026-03-09T18:00:11.516571+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:12 vm00 bash[20770]: audit 2026-03-09T18:00:11.516571+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: cluster 2026-03-09T18:00:11.114897+0000 mgr.y (mgr.14505) 1266 : cluster [DBG] pgmap v1736: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: cluster 2026-03-09T18:00:11.114897+0000 mgr.y (mgr.14505) 1266 : cluster [DBG] pgmap v1736: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: audit 2026-03-09T18:00:11.448708+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: audit 2026-03-09T18:00:11.448708+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: cluster 2026-03-09T18:00:11.457130+0000 mon.a (mon.0) 3650 : cluster [DBG] osdmap e802: 8 total, 8 up, 8 in 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: cluster 2026-03-09T18:00:11.457130+0000 mon.a (mon.0) 3650 : cluster [DBG] osdmap e802: 8 total, 8 up, 8 in 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: audit 2026-03-09T18:00:11.516121+0000 mon.c (mon.2) 940 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: audit 2026-03-09T18:00:11.516121+0000 mon.c (mon.2) 940 : audit [INF] from='client.? 
192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: audit 2026-03-09T18:00:11.516571+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:12 vm00 bash[28333]: audit 2026-03-09T18:00:11.516571+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.792 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj9 /etc/passwd 2026-03-09T18:00:12.822 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put obj10 /etc/passwd 2026-03-09T18:00:12.854 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: cluster 2026-03-09T18:00:11.114897+0000 mgr.y (mgr.14505) 1266 : cluster [DBG] pgmap v1736: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: cluster 2026-03-09T18:00:11.114897+0000 mgr.y (mgr.14505) 1266 : cluster [DBG] pgmap v1736: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: audit 2026-03-09T18:00:11.448708+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: audit 2026-03-09T18:00:11.448708+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: cluster 2026-03-09T18:00:11.457130+0000 mon.a (mon.0) 3650 : cluster [DBG] osdmap e802: 8 total, 8 up, 8 in 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: cluster 2026-03-09T18:00:11.457130+0000 mon.a (mon.0) 3650 : cluster [DBG] osdmap e802: 8 total, 8 up, 8 in 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: audit 2026-03-09T18:00:11.516121+0000 mon.c (mon.2) 940 : audit [INF] from='client.? 192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: audit 2026-03-09T18:00:11.516121+0000 mon.c (mon.2) 940 : audit [INF] from='client.? 
192.168.123.100:0/794540549' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: audit 2026-03-09T18:00:11.516571+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:12 vm02 bash[23351]: audit 2026-03-09T18:00:11.516571+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]: dispatch 2026-03-09T18:00:13.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:00:13 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:13 vm00 bash[28333]: audit 2026-03-09T18:00:12.469592+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:13 vm00 bash[28333]: audit 2026-03-09T18:00:12.469592+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:13 vm00 bash[28333]: cluster 2026-03-09T18:00:12.481135+0000 mon.a (mon.0) 3653 : cluster [DBG] osdmap e803: 8 total, 8 up, 8 in 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:13 vm00 bash[28333]: cluster 2026-03-09T18:00:12.481135+0000 mon.a (mon.0) 3653 : cluster [DBG] osdmap e803: 8 total, 8 up, 8 in 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:13 vm00 bash[20770]: audit 2026-03-09T18:00:12.469592+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:13 vm00 bash[20770]: audit 2026-03-09T18:00:12.469592+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:13 vm00 bash[20770]: cluster 2026-03-09T18:00:12.481135+0000 mon.a (mon.0) 3653 : cluster [DBG] osdmap e803: 8 total, 8 up, 8 in 2026-03-09T18:00:13.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:13 vm00 bash[20770]: cluster 2026-03-09T18:00:12.481135+0000 mon.a (mon.0) 3653 : cluster [DBG] osdmap e803: 8 total, 8 up, 8 in 2026-03-09T18:00:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:13 vm02 bash[23351]: audit 2026-03-09T18:00:12.469592+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:13 vm02 bash[23351]: audit 2026-03-09T18:00:12.469592+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "app": "rados"}]': finished 2026-03-09T18:00:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:13 vm02 bash[23351]: cluster 2026-03-09T18:00:12.481135+0000 mon.a (mon.0) 3653 : cluster [DBG] osdmap e803: 8 total, 8 up, 8 in 2026-03-09T18:00:13.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:13 vm02 bash[23351]: cluster 2026-03-09T18:00:12.481135+0000 mon.a (mon.0) 3653 : cluster [DBG] osdmap e803: 8 total, 8 up, 8 in 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: audit 2026-03-09T18:00:13.086007+0000 mgr.y (mgr.14505) 1267 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: audit 2026-03-09T18:00:13.086007+0000 mgr.y (mgr.14505) 1267 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: cluster 2026-03-09T18:00:13.115496+0000 mgr.y (mgr.14505) 1268 : cluster [DBG] pgmap v1739: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: cluster 2026-03-09T18:00:13.115496+0000 mgr.y (mgr.14505) 1268 : cluster [DBG] pgmap v1739: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: cluster 2026-03-09T18:00:13.469628+0000 mon.a (mon.0) 3654 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: cluster 2026-03-09T18:00:13.469628+0000 mon.a (mon.0) 3654 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: audit 2026-03-09T18:00:13.941067+0000 mon.c (mon.2) 941 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:14 vm00 bash[28333]: audit 2026-03-09T18:00:13.941067+0000 mon.c (mon.2) 941 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: audit 2026-03-09T18:00:13.086007+0000 mgr.y (mgr.14505) 1267 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 
bash[20770]: audit 2026-03-09T18:00:13.086007+0000 mgr.y (mgr.14505) 1267 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: cluster 2026-03-09T18:00:13.115496+0000 mgr.y (mgr.14505) 1268 : cluster [DBG] pgmap v1739: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: cluster 2026-03-09T18:00:13.115496+0000 mgr.y (mgr.14505) 1268 : cluster [DBG] pgmap v1739: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: cluster 2026-03-09T18:00:13.469628+0000 mon.a (mon.0) 3654 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:00:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: cluster 2026-03-09T18:00:13.469628+0000 mon.a (mon.0) 3654 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:00:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: audit 2026-03-09T18:00:13.941067+0000 mon.c (mon.2) 941 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:14.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:14 vm00 bash[20770]: audit 2026-03-09T18:00:13.941067+0000 mon.c (mon.2) 941 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: audit 2026-03-09T18:00:13.086007+0000 mgr.y (mgr.14505) 1267 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: audit 2026-03-09T18:00:13.086007+0000 mgr.y (mgr.14505) 1267 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: cluster 2026-03-09T18:00:13.115496+0000 mgr.y (mgr.14505) 1268 : cluster [DBG] pgmap v1739: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: cluster 2026-03-09T18:00:13.115496+0000 mgr.y (mgr.14505) 1268 : cluster [DBG] pgmap v1739: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: cluster 2026-03-09T18:00:13.469628+0000 mon.a (mon.0) 3654 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: cluster 2026-03-09T18:00:13.469628+0000 mon.a (mon.0) 3654 : cluster [WRN] Health check update: 
1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: audit 2026-03-09T18:00:13.941067+0000 mon.c (mon.2) 941 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:14 vm02 bash[23351]: audit 2026-03-09T18:00:13.941067+0000 mon.c (mon.2) 941 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:16 vm00 bash[28333]: cluster 2026-03-09T18:00:15.115817+0000 mgr.y (mgr.14505) 1269 : cluster [DBG] pgmap v1740: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:16 vm00 bash[28333]: cluster 2026-03-09T18:00:15.115817+0000 mgr.y (mgr.14505) 1269 : cluster [DBG] pgmap v1740: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:16 vm00 bash[20770]: cluster 2026-03-09T18:00:15.115817+0000 mgr.y (mgr.14505) 1269 : cluster [DBG] pgmap v1740: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:16 vm00 bash[20770]: cluster 2026-03-09T18:00:15.115817+0000 mgr.y (mgr.14505) 1269 : cluster [DBG] pgmap v1740: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:00:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:00:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:00:16.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:16 vm02 bash[23351]: cluster 2026-03-09T18:00:15.115817+0000 mgr.y (mgr.14505) 1269 : cluster [DBG] pgmap v1740: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:16 vm02 bash[23351]: cluster 2026-03-09T18:00:15.115817+0000 mgr.y (mgr.14505) 1269 : cluster [DBG] pgmap v1740: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:18 vm00 bash[28333]: cluster 2026-03-09T18:00:17.116113+0000 mgr.y (mgr.14505) 1270 : cluster [DBG] pgmap v1741: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 762 B/s rd, 3.1 KiB/s wr, 1 op/s 2026-03-09T18:00:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:18 vm00 bash[28333]: cluster 2026-03-09T18:00:17.116113+0000 mgr.y (mgr.14505) 1270 : cluster [DBG] pgmap v1741: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 762 B/s rd, 3.1 KiB/s wr, 1 op/s 2026-03-09T18:00:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:18 vm00 bash[20770]: cluster 2026-03-09T18:00:17.116113+0000 mgr.y (mgr.14505) 1270 : cluster [DBG] pgmap v1741: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 762 B/s rd, 
3.1 KiB/s wr, 1 op/s 2026-03-09T18:00:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:18 vm00 bash[20770]: cluster 2026-03-09T18:00:17.116113+0000 mgr.y (mgr.14505) 1270 : cluster [DBG] pgmap v1741: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 762 B/s rd, 3.1 KiB/s wr, 1 op/s 2026-03-09T18:00:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:18 vm02 bash[23351]: cluster 2026-03-09T18:00:17.116113+0000 mgr.y (mgr.14505) 1270 : cluster [DBG] pgmap v1741: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 762 B/s rd, 3.1 KiB/s wr, 1 op/s 2026-03-09T18:00:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:18 vm02 bash[23351]: cluster 2026-03-09T18:00:17.116113+0000 mgr.y (mgr.14505) 1270 : cluster [DBG] pgmap v1741: 176 pgs: 176 active+clean; 469 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 762 B/s rd, 3.1 KiB/s wr, 1 op/s 2026-03-09T18:00:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:20 vm00 bash[28333]: cluster 2026-03-09T18:00:19.116790+0000 mgr.y (mgr.14505) 1271 : cluster [DBG] pgmap v1742: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:20 vm00 bash[28333]: cluster 2026-03-09T18:00:19.116790+0000 mgr.y (mgr.14505) 1271 : cluster [DBG] pgmap v1742: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:20 vm00 bash[20770]: cluster 2026-03-09T18:00:19.116790+0000 mgr.y (mgr.14505) 1271 : cluster [DBG] pgmap v1742: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:20 vm00 bash[20770]: cluster 2026-03-09T18:00:19.116790+0000 mgr.y (mgr.14505) 1271 : cluster [DBG] pgmap v1742: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:20 vm02 bash[23351]: cluster 2026-03-09T18:00:19.116790+0000 mgr.y (mgr.14505) 1271 : cluster [DBG] pgmap v1742: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:20 vm02 bash[23351]: cluster 2026-03-09T18:00:19.116790+0000 mgr.y (mgr.14505) 1271 : cluster [DBG] pgmap v1742: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.117062+0000 mgr.y (mgr.14505) 1272 : cluster [DBG] pgmap v1743: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.117062+0000 mgr.y (mgr.14505) 1272 : cluster [DBG] pgmap v1743: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.963737+0000 mon.a (mon.0) 3655 : cluster [WRN] pool 
'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_objects: 10) 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.963737+0000 mon.a (mon.0) 3655 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_objects: 10) 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.963904+0000 mon.a (mon.0) 3656 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.963904+0000 mon.a (mon.0) 3656 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.976704+0000 mon.a (mon.0) 3657 : cluster [DBG] osdmap e804: 8 total, 8 up, 8 in 2026-03-09T18:00:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:22 vm02 bash[23351]: cluster 2026-03-09T18:00:21.976704+0000 mon.a (mon.0) 3657 : cluster [DBG] osdmap e804: 8 total, 8 up, 8 in 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.117062+0000 mgr.y (mgr.14505) 1272 : cluster [DBG] pgmap v1743: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.117062+0000 mgr.y (mgr.14505) 1272 : cluster [DBG] pgmap v1743: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.963737+0000 mon.a (mon.0) 3655 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_objects: 10) 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.963737+0000 mon.a (mon.0) 3655 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_objects: 10) 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.963904+0000 mon.a (mon.0) 3656 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.963904+0000 mon.a (mon.0) 3656 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.976704+0000 mon.a (mon.0) 3657 : cluster [DBG] osdmap e804: 8 total, 8 up, 8 in 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:22 vm00 bash[28333]: cluster 2026-03-09T18:00:21.976704+0000 mon.a (mon.0) 3657 : cluster [DBG] osdmap e804: 8 total, 8 up, 8 in 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.117062+0000 mgr.y (mgr.14505) 1272 : cluster [DBG] pgmap v1743: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 
bash[20770]: cluster 2026-03-09T18:00:21.117062+0000 mgr.y (mgr.14505) 1272 : cluster [DBG] pgmap v1743: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.963737+0000 mon.a (mon.0) 3655 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_objects: 10) 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.963737+0000 mon.a (mon.0) 3655 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_objects: 10) 2026-03-09T18:00:23.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.963904+0000 mon.a (mon.0) 3656 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.963904+0000 mon.a (mon.0) 3656 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.976704+0000 mon.a (mon.0) 3657 : cluster [DBG] osdmap e804: 8 total, 8 up, 8 in 2026-03-09T18:00:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:22 vm00 bash[20770]: cluster 2026-03-09T18:00:21.976704+0000 mon.a (mon.0) 3657 : cluster [DBG] osdmap e804: 8 total, 8 up, 8 in 2026-03-09T18:00:23.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:00:23 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:00:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:24 vm02 bash[23351]: audit 2026-03-09T18:00:23.093679+0000 mgr.y (mgr.14505) 1273 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:24 vm02 bash[23351]: audit 2026-03-09T18:00:23.093679+0000 mgr.y (mgr.14505) 1273 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:24 vm02 bash[23351]: cluster 2026-03-09T18:00:23.117894+0000 mgr.y (mgr.14505) 1274 : cluster [DBG] pgmap v1745: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:24 vm02 bash[23351]: cluster 2026-03-09T18:00:23.117894+0000 mgr.y (mgr.14505) 1274 : cluster [DBG] pgmap v1745: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:24 vm00 bash[28333]: audit 2026-03-09T18:00:23.093679+0000 mgr.y (mgr.14505) 1273 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:24 vm00 bash[28333]: audit 2026-03-09T18:00:23.093679+0000 mgr.y (mgr.14505) 1273 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 18:00:24 vm00 bash[28333]: cluster 2026-03-09T18:00:23.117894+0000 mgr.y (mgr.14505) 1274 : cluster [DBG] pgmap v1745: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:24 vm00 bash[28333]: cluster 2026-03-09T18:00:23.117894+0000 mgr.y (mgr.14505) 1274 : cluster [DBG] pgmap v1745: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:24 vm00 bash[20770]: audit 2026-03-09T18:00:23.093679+0000 mgr.y (mgr.14505) 1273 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:24 vm00 bash[20770]: audit 2026-03-09T18:00:23.093679+0000 mgr.y (mgr.14505) 1273 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:24 vm00 bash[20770]: cluster 2026-03-09T18:00:23.117894+0000 mgr.y (mgr.14505) 1274 : cluster [DBG] pgmap v1745: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:25.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:24 vm00 bash[20770]: cluster 2026-03-09T18:00:23.117894+0000 mgr.y (mgr.14505) 1274 : cluster [DBG] pgmap v1745: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:26.566 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:00:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:00:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:00:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:26 vm02 bash[23351]: cluster 2026-03-09T18:00:25.118217+0000 mgr.y (mgr.14505) 1275 : cluster [DBG] pgmap v1746: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:26 vm02 bash[23351]: cluster 2026-03-09T18:00:25.118217+0000 mgr.y (mgr.14505) 1275 : cluster [DBG] pgmap v1746: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:26 vm00 bash[28333]: cluster 2026-03-09T18:00:25.118217+0000 mgr.y (mgr.14505) 1275 : cluster [DBG] pgmap v1746: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:26 vm00 bash[28333]: cluster 2026-03-09T18:00:25.118217+0000 mgr.y (mgr.14505) 1275 : cluster [DBG] pgmap v1746: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:26 vm00 bash[20770]: cluster 2026-03-09T18:00:25.118217+0000 mgr.y (mgr.14505) 1275 : cluster [DBG] pgmap v1746: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:26 vm00 bash[20770]: cluster 
2026-03-09T18:00:25.118217+0000 mgr.y (mgr.14505) 1275 : cluster [DBG] pgmap v1746: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.106662+0000 mon.c (mon.2) 942 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.106662+0000 mon.c (mon.2) 942 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.452192+0000 mon.c (mon.2) 943 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.452192+0000 mon.c (mon.2) 943 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.453582+0000 mon.c (mon.2) 944 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.453582+0000 mon.c (mon.2) 944 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.488914+0000 mon.a (mon.0) 3658 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:27.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:27 vm02 bash[23351]: audit 2026-03-09T18:00:27.488914+0000 mon.a (mon.0) 3658 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.106662+0000 mon.c (mon.2) 942 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.106662+0000 mon.c (mon.2) 942 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.452192+0000 mon.c (mon.2) 943 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.452192+0000 mon.c (mon.2) 943 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.453582+0000 mon.c (mon.2) 944 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.453582+0000 mon.c (mon.2) 944 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.488914+0000 mon.a (mon.0) 3658 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:27 vm00 bash[28333]: audit 2026-03-09T18:00:27.488914+0000 mon.a (mon.0) 3658 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.106662+0000 mon.c (mon.2) 942 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.106662+0000 mon.c (mon.2) 942 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.452192+0000 mon.c (mon.2) 943 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.452192+0000 mon.c (mon.2) 943 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.453582+0000 mon.c (mon.2) 944 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.453582+0000 mon.c (mon.2) 944 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.488914+0000 mon.a (mon.0) 3658 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:28.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:27 vm00 bash[20770]: audit 2026-03-09T18:00:27.488914+0000 mon.a (mon.0) 3658 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:28 vm02 bash[23351]: cluster 2026-03-09T18:00:27.119211+0000 mgr.y (mgr.14505) 1276 : cluster [DBG] pgmap v1747: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:28.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:28 vm02 bash[23351]: cluster 2026-03-09T18:00:27.119211+0000 mgr.y (mgr.14505) 1276 : cluster [DBG] pgmap v1747: 176 pgs: 
176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:29.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:28 vm00 bash[28333]: cluster 2026-03-09T18:00:27.119211+0000 mgr.y (mgr.14505) 1276 : cluster [DBG] pgmap v1747: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:29.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:28 vm00 bash[28333]: cluster 2026-03-09T18:00:27.119211+0000 mgr.y (mgr.14505) 1276 : cluster [DBG] pgmap v1747: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:29.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:28 vm00 bash[20770]: cluster 2026-03-09T18:00:27.119211+0000 mgr.y (mgr.14505) 1276 : cluster [DBG] pgmap v1747: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:29.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:28 vm00 bash[20770]: cluster 2026-03-09T18:00:27.119211+0000 mgr.y (mgr.14505) 1276 : cluster [DBG] pgmap v1747: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 921 B/s wr, 1 op/s 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:29 vm00 bash[28333]: audit 2026-03-09T18:00:28.952149+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:29 vm00 bash[28333]: audit 2026-03-09T18:00:28.952149+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:29 vm00 bash[28333]: audit 2026-03-09T18:00:28.954643+0000 mon.c (mon.2) 945 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:29 vm00 bash[28333]: audit 2026-03-09T18:00:28.954643+0000 mon.c (mon.2) 945 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:29 vm00 bash[28333]: cluster 2026-03-09T18:00:29.119714+0000 mgr.y (mgr.14505) 1277 : cluster [DBG] pgmap v1748: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:29 vm00 bash[28333]: cluster 2026-03-09T18:00:29.119714+0000 mgr.y (mgr.14505) 1277 : cluster [DBG] pgmap v1748: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:29 vm00 bash[20770]: audit 2026-03-09T18:00:28.952149+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:30.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:29 vm00 bash[20770]: audit 2026-03-09T18:00:28.952149+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:29 vm00 bash[20770]: audit 2026-03-09T18:00:28.954643+0000 mon.c (mon.2) 945 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-09T18:00:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:29 vm00 bash[20770]: audit 2026-03-09T18:00:28.954643+0000 mon.c (mon.2) 945 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:29 vm00 bash[20770]: cluster 2026-03-09T18:00:29.119714+0000 mgr.y (mgr.14505) 1277 : cluster [DBG] pgmap v1748: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:30.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:29 vm00 bash[20770]: cluster 2026-03-09T18:00:29.119714+0000 mgr.y (mgr.14505) 1277 : cluster [DBG] pgmap v1748: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:29 vm02 bash[23351]: audit 2026-03-09T18:00:28.952149+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:29 vm02 bash[23351]: audit 2026-03-09T18:00:28.952149+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:00:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:29 vm02 bash[23351]: audit 2026-03-09T18:00:28.954643+0000 mon.c (mon.2) 945 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:29 vm02 bash[23351]: audit 2026-03-09T18:00:28.954643+0000 mon.c (mon.2) 945 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:29 vm02 bash[23351]: cluster 2026-03-09T18:00:29.119714+0000 mgr.y (mgr.14505) 1277 : cluster [DBG] pgmap v1748: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:29 vm02 bash[23351]: cluster 2026-03-09T18:00:29.119714+0000 mgr.y (mgr.14505) 1277 : cluster [DBG] pgmap v1748: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:32 vm00 bash[28333]: cluster 2026-03-09T18:00:31.119989+0000 mgr.y (mgr.14505) 1278 : cluster [DBG] pgmap v1749: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:32 vm00 bash[28333]: cluster 2026-03-09T18:00:31.119989+0000 mgr.y (mgr.14505) 1278 : cluster [DBG] pgmap v1749: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:32 vm00 bash[20770]: cluster 2026-03-09T18:00:31.119989+0000 mgr.y (mgr.14505) 1278 : cluster [DBG] pgmap v1749: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:32 vm00 bash[20770]: cluster 2026-03-09T18:00:31.119989+0000 mgr.y (mgr.14505) 1278 : cluster [DBG] pgmap v1749: 176 pgs: 176 active+clean; 476 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:32 vm02 bash[23351]: cluster 2026-03-09T18:00:31.119989+0000 mgr.y (mgr.14505) 1278 : cluster [DBG] pgmap v1749: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:32 vm02 bash[23351]: cluster 2026-03-09T18:00:31.119989+0000 mgr.y (mgr.14505) 1278 : cluster [DBG] pgmap v1749: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:00:33.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:00:33 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:34 vm00 bash[28333]: audit 2026-03-09T18:00:33.104207+0000 mgr.y (mgr.14505) 1279 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:34 vm00 bash[28333]: audit 2026-03-09T18:00:33.104207+0000 mgr.y (mgr.14505) 1279 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:34 vm00 bash[28333]: cluster 2026-03-09T18:00:33.120470+0000 mgr.y (mgr.14505) 1280 : cluster [DBG] pgmap v1750: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:34 vm00 bash[28333]: cluster 2026-03-09T18:00:33.120470+0000 mgr.y (mgr.14505) 1280 : cluster [DBG] pgmap v1750: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:34 vm00 bash[20770]: audit 2026-03-09T18:00:33.104207+0000 mgr.y (mgr.14505) 1279 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:34 vm00 bash[20770]: audit 2026-03-09T18:00:33.104207+0000 mgr.y (mgr.14505) 1279 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:34 vm00 bash[20770]: cluster 2026-03-09T18:00:33.120470+0000 mgr.y (mgr.14505) 1280 : cluster [DBG] pgmap v1750: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:00:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:34 vm00 bash[20770]: cluster 2026-03-09T18:00:33.120470+0000 mgr.y (mgr.14505) 1280 : cluster [DBG] pgmap v1750: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:00:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:34 vm02 bash[23351]: audit 2026-03-09T18:00:33.104207+0000 mgr.y (mgr.14505) 1279 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:34 vm02 bash[23351]: audit 2026-03-09T18:00:33.104207+0000 mgr.y (mgr.14505) 1279 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:34 vm02 bash[23351]: cluster 2026-03-09T18:00:33.120470+0000 mgr.y (mgr.14505) 1280 : cluster [DBG] pgmap v1750: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:00:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:34 vm02 bash[23351]: cluster 2026-03-09T18:00:33.120470+0000 mgr.y (mgr.14505) 1280 : cluster [DBG] pgmap v1750: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:00:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:36 vm00 bash[28333]: cluster 2026-03-09T18:00:35.120865+0000 mgr.y (mgr.14505) 1281 : cluster [DBG] pgmap v1751: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:36 vm00 bash[28333]: cluster 2026-03-09T18:00:35.120865+0000 mgr.y (mgr.14505) 1281 : cluster [DBG] pgmap v1751: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:36 vm00 bash[20770]: cluster 2026-03-09T18:00:35.120865+0000 mgr.y (mgr.14505) 1281 : cluster [DBG] pgmap v1751: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:36 vm00 bash[20770]: cluster 2026-03-09T18:00:35.120865+0000 mgr.y (mgr.14505) 1281 : cluster [DBG] pgmap v1751: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:00:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:00:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:00:36.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:36 vm02 bash[23351]: cluster 2026-03-09T18:00:35.120865+0000 mgr.y (mgr.14505) 1281 : cluster [DBG] pgmap v1751: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:36 vm02 bash[23351]: cluster 2026-03-09T18:00:35.120865+0000 mgr.y (mgr.14505) 1281 : cluster [DBG] pgmap v1751: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:38 vm00 bash[28333]: cluster 2026-03-09T18:00:37.121260+0000 mgr.y (mgr.14505) 1282 : cluster [DBG] pgmap v1752: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:38 vm00 bash[28333]: cluster 2026-03-09T18:00:37.121260+0000 mgr.y (mgr.14505) 1282 : cluster [DBG] pgmap v1752: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:38 vm00 bash[20770]: cluster 2026-03-09T18:00:37.121260+0000 mgr.y (mgr.14505) 1282 : cluster [DBG] pgmap v1752: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:38.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:38 vm00 bash[20770]: cluster 2026-03-09T18:00:37.121260+0000 mgr.y (mgr.14505) 1282 : cluster [DBG] pgmap v1752: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:38 vm02 bash[23351]: cluster 2026-03-09T18:00:37.121260+0000 mgr.y (mgr.14505) 1282 : cluster [DBG] pgmap v1752: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:38.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:38 vm02 bash[23351]: cluster 2026-03-09T18:00:37.121260+0000 mgr.y (mgr.14505) 1282 : cluster [DBG] pgmap v1752: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:40 vm02 bash[23351]: cluster 2026-03-09T18:00:39.122002+0000 mgr.y (mgr.14505) 1283 : cluster [DBG] pgmap v1753: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:40 vm02 bash[23351]: cluster 2026-03-09T18:00:39.122002+0000 mgr.y (mgr.14505) 1283 : cluster [DBG] pgmap v1753: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:40 vm00 bash[28333]: cluster 2026-03-09T18:00:39.122002+0000 mgr.y (mgr.14505) 1283 : cluster [DBG] pgmap v1753: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:40.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:40 vm00 bash[28333]: cluster 2026-03-09T18:00:39.122002+0000 mgr.y (mgr.14505) 1283 : cluster [DBG] pgmap v1753: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:40 vm00 bash[20770]: cluster 2026-03-09T18:00:39.122002+0000 mgr.y (mgr.14505) 1283 : cluster [DBG] pgmap v1753: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:40.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:40 vm00 bash[20770]: cluster 2026-03-09T18:00:39.122002+0000 mgr.y (mgr.14505) 1283 : cluster [DBG] pgmap v1753: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:42 vm02 bash[23351]: cluster 2026-03-09T18:00:41.122311+0000 mgr.y (mgr.14505) 1284 : cluster [DBG] pgmap v1754: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:42 vm02 bash[23351]: cluster 2026-03-09T18:00:41.122311+0000 mgr.y (mgr.14505) 1284 : cluster [DBG] pgmap v1754: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:42 vm00 bash[28333]: cluster 2026-03-09T18:00:41.122311+0000 mgr.y (mgr.14505) 1284 : cluster [DBG] pgmap v1754: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:42.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:42 vm00 bash[28333]: cluster 
2026-03-09T18:00:41.122311+0000 mgr.y (mgr.14505) 1284 : cluster [DBG] pgmap v1754: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:42 vm00 bash[20770]: cluster 2026-03-09T18:00:41.122311+0000 mgr.y (mgr.14505) 1284 : cluster [DBG] pgmap v1754: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:42.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:42 vm00 bash[20770]: cluster 2026-03-09T18:00:41.122311+0000 mgr.y (mgr.14505) 1284 : cluster [DBG] pgmap v1754: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:42.855 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=131723 2026-03-09T18:00:42.855 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_objects 100 2026-03-09T18:00:42.855 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put onemore /etc/passwd 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/8678050 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 msgr2=0x7f3240100ff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- 192.168.123.100:0/8678050 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240100ff0 secure :-1 s=READY pgs=3134 cs=0 l=1 rev1=1 crypto rx=0x7f3230009a30 tx=0x7f323001c9b0 comp rx=0 tx=0).stop 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/8678050 shutdown_connections 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- 192.168.123.100:0/8678050 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 0x7f3240111690 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- 192.168.123.100:0/8678050 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3240101530 0x7f324010eb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- 192.168.123.100:0/8678050 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240100ff0 unknown :-1 s=CLOSED pgs=3134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/8678050 >> 192.168.123.100:0/8678050 conn(0x7f32400fc820 msgr2=0x7f32400fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:42.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/8678050 shutdown_connections 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/8678050 wait complete. 
2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 Processor -- start 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- start start 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240104710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3240101530 0x7f3240104c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 0x7f3240105190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3240116c90 con 0x7f3240100c10 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f3240116b10 con 0x7f3240101530 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3240116e10 con 0x7f324010f200 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240104710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245c03640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 0x7f3240105190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240104710 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:50070/0 (socket says 192.168.123.100:50070) 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 -- 192.168.123.100:0/3562869086 learned_addr learned my addr 192.168.123.100:0/3562869086 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:00:42.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 -- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 msgr2=0x7f3240105190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3244c01640 1 --2- 
192.168.123.100:0/3562869086 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3240101530 0x7f3240104c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 0x7f3240105190 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 -- 192.168.123.100:0/3562869086 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3240101530 msgr2=0x7f3240104c50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3240101530 0x7f3240104c50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f32401afa60 con 0x7f3240100c10 2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245c03640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 0x7f3240105190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
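The debug ms=1 traffic above is one `ceph osd pool set-quota` invocation made by the rados/test_pool_quota.sh workunit: each CLI run brings up its own messenger, walks the msgr2 banner/hello/auth handshake with a monitor, subscribes to the monmap, mgrmap and osdmap, fetches get_command_descriptions, and only then dispatches the actual mon_command seen a few lines below. A minimal shell sketch of the equivalent quota step, assuming a client.admin keyring; the pool name is simply the one this job created:

    # pool name assumed from this run; any existing pool would do
    POOL=bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c
    # sends mon_command({"prefix": "osd pool set-quota", "field": "max_objects", "val": "100"})
    ceph osd pool set-quota "$POOL" max_objects 100
    # read the quota back to confirm it was applied
    ceph osd pool get-quota "$POOL"
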
2026-03-09T18:00:42.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f3245402640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240104710 secure :-1 s=READY pgs=3135 cs=0 l=1 rev1=1 crypto rx=0x7f323001c7e0 tx=0x7f3230005b80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:42.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3230004280 con 0x7f3240100c10 2026-03-09T18:00:42.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f3230002d10 con 0x7f3240100c10 2026-03-09T18:00:42.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f32300ae9e0 con 0x7f3240100c10 2026-03-09T18:00:42.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f32401afcf0 con 0x7f3240100c10 2026-03-09T18:00:42.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.917+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f32401b0180 con 0x7f3240100c10 2026-03-09T18:00:42.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.921+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3208005190 con 0x7f3240100c10 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f32300028a0 con 0x7f3240100c10 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f322e7fc640 1 --2- 192.168.123.100:0/3562869086 >> v2:192.168.123.100:6800/2673235927 conn(0x7f321c0777a0 0x7f321c079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f3244c01640 1 --2- 192.168.123.100:0/3562869086 >> v2:192.168.123.100:6800/2673235927 conn(0x7f321c0777a0 0x7f321c079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(804..804 src has 1..804) ==== 7736+0+0 (secure 0 0 0) 0x7f3230133d80 con 0x7f3240100c10 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f3244c01640 1 --2- 192.168.123.100:0/3562869086 >> v2:192.168.123.100:6800/2673235927 conn(0x7f321c0777a0 0x7f321c079c60 secure :-1 s=READY pgs=4266 cs=0 l=1 rev1=1 crypto rx=0x7f3234005e10 tx=0x7f3234005da0 comp rx=0 tx=0).ready 
entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=805}) -- 0x7f321c082d70 con 0x7f3240100c10 2026-03-09T18:00:42.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:42.925+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3230016630 con 0x7f3240100c10 2026-03-09T18:00:43.023 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:43.017+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"} v 0) -- 0x7f3208005480 con 0x7f3240100c10 2026-03-09T18:00:43.376 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:00:43 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:00:43.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:43.377+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v805) ==== 225+0+0 (secure 0 0 0) 0x7f32301005d0 con 0x7f3240100c10 2026-03-09T18:00:43.393 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:43.389+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(805..805 src has 1..805) ==== 628+0+0 (secure 0 0 0) 0x7f32300f85c0 con 0x7f3240100c10 2026-03-09T18:00:43.394 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:43.389+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=806}) -- 0x7f321c083b40 con 0x7f3240100c10 2026-03-09T18:00:43.440 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:43.437+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"} v 0) -- 0x7f3208005bc0 con 0x7f3240100c10 2026-03-09T18:00:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:43 vm02 bash[23351]: audit 2026-03-09T18:00:43.023154+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:43.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:43 vm02 bash[23351]: audit 2026-03-09T18:00:43.023154+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:43 vm00 bash[28333]: audit 2026-03-09T18:00:43.023154+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 
192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:43.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:43 vm00 bash[28333]: audit 2026-03-09T18:00:43.023154+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:43 vm00 bash[20770]: audit 2026-03-09T18:00:43.023154+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:43.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:43 vm00 bash[20770]: audit 2026-03-09T18:00:43.023154+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.394 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.389+0000 7f322e7fc640 1 -- 192.168.123.100:0/3562869086 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v806) ==== 225+0+0 (secure 0 0 0) 0x7f3230105480 con 0x7f3240100c10 2026-03-09T18:00:44.394 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 100 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 >> v2:192.168.123.100:6800/2673235927 conn(0x7f321c0777a0 msgr2=0x7f321c079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 --2- 192.168.123.100:0/3562869086 >> v2:192.168.123.100:6800/2673235927 conn(0x7f321c0777a0 0x7f321c079c60 secure :-1 s=READY pgs=4266 cs=0 l=1 rev1=1 crypto rx=0x7f3234005e10 tx=0x7f3234005da0 comp rx=0 tx=0).stop 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 msgr2=0x7f3240104710 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240104710 secure :-1 s=READY pgs=3135 cs=0 l=1 rev1=1 crypto rx=0x7f323001c7e0 tx=0x7f3230005b80 comp rx=0 tx=0).stop 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 shutdown_connections 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 --2- 192.168.123.100:0/3562869086 >> v2:192.168.123.100:6800/2673235927 conn(0x7f321c0777a0 0x7f321c079c60 unknown :-1 s=CLOSED pgs=4266 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f324010f200 0x7f3240105190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3240101530 0x7f3240104c50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 --2- 192.168.123.100:0/3562869086 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3240100c10 0x7f3240104710 unknown :-1 s=CLOSED pgs=3135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 >> 192.168.123.100:0/3562869086 conn(0x7f32400fc820 msgr2=0x7f32400fec10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 shutdown_connections 2026-03-09T18:00:44.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:44.393+0000 7f324768d640 1 -- 192.168.123.100:0/3562869086 wait complete. 2026-03-09T18:00:44.414 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 131723 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.114715+0000 mgr.y (mgr.14505) 1285 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.114715+0000 mgr.y (mgr.14505) 1285 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: cluster 2026-03-09T18:00:43.122860+0000 mgr.y (mgr.14505) 1286 : cluster [DBG] pgmap v1755: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: cluster 2026-03-09T18:00:43.122860+0000 mgr.y (mgr.14505) 1286 : cluster [DBG] pgmap v1755: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.383255+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.383255+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 
192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: cluster 2026-03-09T18:00:43.386997+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e805: 8 total, 8 up, 8 in 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: cluster 2026-03-09T18:00:43.386997+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e805: 8 total, 8 up, 8 in 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.440895+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.440895+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.961082+0000 mon.c (mon.2) 946 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:44 vm00 bash[28333]: audit 2026-03-09T18:00:43.961082+0000 mon.c (mon.2) 946 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.114715+0000 mgr.y (mgr.14505) 1285 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.114715+0000 mgr.y (mgr.14505) 1285 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: cluster 2026-03-09T18:00:43.122860+0000 mgr.y (mgr.14505) 1286 : cluster [DBG] pgmap v1755: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: cluster 2026-03-09T18:00:43.122860+0000 mgr.y (mgr.14505) 1286 : cluster [DBG] pgmap v1755: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.383255+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 
192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.383255+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: cluster 2026-03-09T18:00:43.386997+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e805: 8 total, 8 up, 8 in 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: cluster 2026-03-09T18:00:43.386997+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e805: 8 total, 8 up, 8 in 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.440895+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.440895+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.961082+0000 mon.c (mon.2) 946 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:44.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:44 vm00 bash[20770]: audit 2026-03-09T18:00:43.961082+0000 mon.c (mon.2) 946 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:44.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.114715+0000 mgr.y (mgr.14505) 1285 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.114715+0000 mgr.y (mgr.14505) 1285 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: cluster 2026-03-09T18:00:43.122860+0000 mgr.y (mgr.14505) 1286 : cluster [DBG] pgmap v1755: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: cluster 2026-03-09T18:00:43.122860+0000 mgr.y (mgr.14505) 1286 : cluster [DBG] pgmap v1755: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 
2026-03-09T18:00:43.383255+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.383255+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: cluster 2026-03-09T18:00:43.386997+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e805: 8 total, 8 up, 8 in 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: cluster 2026-03-09T18:00:43.386997+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e805: 8 total, 8 up, 8 in 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.440895+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.440895+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.961082+0000 mon.c (mon.2) 946 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:44.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:44 vm02 bash[23351]: audit 2026-03-09T18:00:43.961082+0000 mon.c (mon.2) 946 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:45 vm00 bash[28333]: audit 2026-03-09T18:00:44.394024+0000 mon.a (mon.0) 3664 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:45 vm00 bash[28333]: audit 2026-03-09T18:00:44.394024+0000 mon.a (mon.0) 3664 : audit [INF] from='client.? 
192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:45 vm00 bash[28333]: cluster 2026-03-09T18:00:44.396188+0000 mon.a (mon.0) 3665 : cluster [DBG] osdmap e806: 8 total, 8 up, 8 in 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:45 vm00 bash[28333]: cluster 2026-03-09T18:00:44.396188+0000 mon.a (mon.0) 3665 : cluster [DBG] osdmap e806: 8 total, 8 up, 8 in 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:45 vm00 bash[20770]: audit 2026-03-09T18:00:44.394024+0000 mon.a (mon.0) 3664 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:45 vm00 bash[20770]: audit 2026-03-09T18:00:44.394024+0000 mon.a (mon.0) 3664 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:45 vm00 bash[20770]: cluster 2026-03-09T18:00:44.396188+0000 mon.a (mon.0) 3665 : cluster [DBG] osdmap e806: 8 total, 8 up, 8 in 2026-03-09T18:00:45.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:45 vm00 bash[20770]: cluster 2026-03-09T18:00:44.396188+0000 mon.a (mon.0) 3665 : cluster [DBG] osdmap e806: 8 total, 8 up, 8 in 2026-03-09T18:00:45.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:45 vm02 bash[23351]: audit 2026-03-09T18:00:44.394024+0000 mon.a (mon.0) 3664 : audit [INF] from='client.? 192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:45 vm02 bash[23351]: audit 2026-03-09T18:00:44.394024+0000 mon.a (mon.0) 3664 : audit [INF] from='client.? 
192.168.123.100:0/3562869086' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "100"}]': finished 2026-03-09T18:00:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:45 vm02 bash[23351]: cluster 2026-03-09T18:00:44.396188+0000 mon.a (mon.0) 3665 : cluster [DBG] osdmap e806: 8 total, 8 up, 8 in 2026-03-09T18:00:45.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:45 vm02 bash[23351]: cluster 2026-03-09T18:00:44.396188+0000 mon.a (mon.0) 3665 : cluster [DBG] osdmap e806: 8 total, 8 up, 8 in 2026-03-09T18:00:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:46 vm00 bash[28333]: cluster 2026-03-09T18:00:45.123117+0000 mgr.y (mgr.14505) 1287 : cluster [DBG] pgmap v1758: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:46.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:46 vm00 bash[28333]: cluster 2026-03-09T18:00:45.123117+0000 mgr.y (mgr.14505) 1287 : cluster [DBG] pgmap v1758: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:46 vm00 bash[20770]: cluster 2026-03-09T18:00:45.123117+0000 mgr.y (mgr.14505) 1287 : cluster [DBG] pgmap v1758: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:46.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:46 vm00 bash[20770]: cluster 2026-03-09T18:00:45.123117+0000 mgr.y (mgr.14505) 1287 : cluster [DBG] pgmap v1758: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:46.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:00:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:00:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:00:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:46 vm02 bash[23351]: cluster 2026-03-09T18:00:45.123117+0000 mgr.y (mgr.14505) 1287 : cluster [DBG] pgmap v1758: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:46.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:46 vm02 bash[23351]: cluster 2026-03-09T18:00:45.123117+0000 mgr.y (mgr.14505) 1287 : cluster [DBG] pgmap v1758: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:00:47.000 INFO:tasks.workunit.client.0.vm00.stderr:+ [ 0 -ne 0 ] 2026-03-09T18:00:47.000 INFO:tasks.workunit.client.0.vm00.stderr:+ true 2026-03-09T18:00:47.000 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put twomore /etc/passwd 2026-03-09T18:00:47.028 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_bytes 100 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/2404663152 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 msgr2=0x7f68f810ee30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- 192.168.123.100:0/2404663152 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f810ee30 secure :-1 s=READY pgs=3136 cs=0 l=1 rev1=1 crypto 
rx=0x7f68e4009a80 tx=0x7f68e401c960 comp rx=0 tx=0).stop 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/2404663152 shutdown_connections 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- 192.168.123.100:0/2404663152 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f68f810f370 0x7f68f8111760 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- 192.168.123.100:0/2404663152 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f810ee30 unknown :-1 s=CLOSED pgs=3136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- 192.168.123.100:0/2404663152 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f8101640 0x7f68f8101a20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/2404663152 >> 192.168.123.100:0/2404663152 conn(0x7f68f80fd4f0 msgr2=0x7f68f80ff910 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:47.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/2404663152 shutdown_connections 2026-03-09T18:00:47.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/2404663152 wait complete. 2026-03-09T18:00:47.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 Processor -- start 2026-03-09T18:00:47.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- start start 2026-03-09T18:00:47.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f68f8101640 0x7f68f819f360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f819f8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 0x7f68f81a3c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f68f8116d20 con 0x7f68f8101f60 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f68f8116ba0 con 0x7f68f8101640 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f68f8116ea0 con 0x7f68f810f370 
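The shell trace a little above (`+ rados -p ... put twomore /etc/passwd` followed by `+ ceph osd pool set-quota ... max_bytes 100`) is the workunit switching from the object-count quota to a byte quota; the messenger lines that follow are the new `ceph` process repeating the same connect/subscribe/command sequence, this time landing on mon.2 (mon.c) at :3301. A hedged, self-contained sketch of that step, with a simple health poll added purely for illustration (the poll is not part of the traced script):

    POOL=bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c       # pool name assumed from this run
    rados -p "$POOL" put twomore /etc/passwd        # write one more object, as in the trace
    ceph osd pool set-quota "$POOL" max_bytes 100   # 100-byte quota, guaranteed to be exceeded
    # illustrative wait until the monitor raises POOL_FULL for the over-quota pool
    until ceph health detail | grep -q POOL_FULL; do sleep 5; done
    echo "pool $POOL is flagged full"
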
2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 0x7f68f81a3c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f6ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f819f8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 0x7f68f81a3c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:49854/0 (socket says 192.168.123.100:49854) 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 -- 192.168.123.100:0/962602904 learned_addr learned my addr 192.168.123.100:0/962602904 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f77fe640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f68f8101640 0x7f68f819f360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 -- 192.168.123.100:0/962602904 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f68f8101640 msgr2=0x7f68f819f360 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f68f8101640 0x7f68f819f360 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 -- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 msgr2=0x7f68f819f8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f819f8a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 -- 192.168.123.100:0/962602904 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f68f81a43b0 con 0x7f68f810f370 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f7fff640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 0x7f68f81a3c30 secure :-1 s=READY pgs=3200 cs=0 l=1 rev1=1 crypto rx=0x7f68ec00ae30 tx=0x7f68ec00c420 comp rx=0 
tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f68ec017020 con 0x7f68f810f370 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f6ffd640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f819f8a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T18:00:47.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f68ec00a0d0 con 0x7f68f810f370 2026-03-09T18:00:47.091 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f68ec013e60 con 0x7f68f810f370 2026-03-09T18:00:47.091 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f68f81a46a0 con 0x7f68f810f370 2026-03-09T18:00:47.092 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.085+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f68f81abee0 con 0x7f68f810f370 2026-03-09T18:00:47.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.089+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f68bc005190 con 0x7f68f810f370 2026-03-09T18:00:47.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.089+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f68ec012070 con 0x7f68f810f370 2026-03-09T18:00:47.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.089+0000 7f68f4ff9640 1 --2- 192.168.123.100:0/962602904 >> v2:192.168.123.100:6800/2673235927 conn(0x7f68cc077640 0x7f68cc079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:00:47.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.089+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(807..807 src has 1..807) ==== 7736+0+0 (secure 0 0 0) 0x7f68ec0999b0 con 0x7f68f810f370 2026-03-09T18:00:47.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.089+0000 7f68f77fe640 1 --2- 192.168.123.100:0/962602904 >> v2:192.168.123.100:6800/2673235927 conn(0x7f68cc077640 0x7f68cc079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:00:47.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.089+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f68ec066200 con 0x7f68f810f370 2026-03-09T18:00:47.096 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.093+0000 7f68f77fe640 1 --2- 192.168.123.100:0/962602904 >> v2:192.168.123.100:6800/2673235927 conn(0x7f68cc077640 0x7f68cc079b00 secure :-1 s=READY pgs=4268 cs=0 l=1 rev1=1 crypto rx=0x7f68e8004660 tx=0x7f68e80091c0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:00:47.192 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.185+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"} v 0) -- 0x7f68bc005480 con 0x7f68f810f370 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: cluster 2026-03-09T18:00:46.967913+0000 mon.a (mon.0) 3666 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: cluster 2026-03-09T18:00:46.967913+0000 mon.a (mon.0) 3666 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: cluster 2026-03-09T18:00:46.968089+0000 mon.a (mon.0) 3667 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: cluster 2026-03-09T18:00:46.968089+0000 mon.a (mon.0) 3667 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: cluster 2026-03-09T18:00:46.994654+0000 mon.a (mon.0) 3668 : cluster [DBG] osdmap e807: 8 total, 8 up, 8 in 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: cluster 2026-03-09T18:00:46.994654+0000 mon.a (mon.0) 3668 : cluster [DBG] osdmap e807: 8 total, 8 up, 8 in 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: audit 2026-03-09T18:00:47.192544+0000 mon.c (mon.2) 947 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: audit 2026-03-09T18:00:47.192544+0000 mon.c (mon.2) 947 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: audit 2026-03-09T18:00:47.192984+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:47 vm00 bash[28333]: audit 2026-03-09T18:00:47.192984+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: cluster 2026-03-09T18:00:46.967913+0000 mon.a (mon.0) 3666 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: cluster 2026-03-09T18:00:46.967913+0000 mon.a (mon.0) 3666 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: cluster 2026-03-09T18:00:46.968089+0000 mon.a (mon.0) 3667 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: cluster 2026-03-09T18:00:46.968089+0000 mon.a (mon.0) 3667 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: cluster 2026-03-09T18:00:46.994654+0000 mon.a (mon.0) 3668 : cluster [DBG] osdmap e807: 8 total, 8 up, 8 in 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: cluster 2026-03-09T18:00:46.994654+0000 mon.a (mon.0) 3668 : cluster [DBG] osdmap e807: 8 total, 8 up, 8 in 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: audit 2026-03-09T18:00:47.192544+0000 mon.c (mon.2) 947 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: audit 2026-03-09T18:00:47.192544+0000 mon.c (mon.2) 947 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: audit 2026-03-09T18:00:47.192984+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:47 vm00 bash[20770]: audit 2026-03-09T18:00:47.192984+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: cluster 2026-03-09T18:00:46.967913+0000 mon.a (mon.0) 3666 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: cluster 2026-03-09T18:00:46.967913+0000 mon.a (mon.0) 3666 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: cluster 2026-03-09T18:00:46.968089+0000 mon.a (mon.0) 3667 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: cluster 2026-03-09T18:00:46.968089+0000 mon.a (mon.0) 3667 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: cluster 2026-03-09T18:00:46.994654+0000 mon.a (mon.0) 3668 : cluster [DBG] osdmap e807: 8 total, 8 up, 8 in 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: cluster 2026-03-09T18:00:46.994654+0000 mon.a (mon.0) 3668 : cluster [DBG] osdmap e807: 8 total, 8 up, 8 in 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: audit 2026-03-09T18:00:47.192544+0000 mon.c (mon.2) 947 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: audit 2026-03-09T18:00:47.192544+0000 mon.c (mon.2) 947 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: audit 2026-03-09T18:00:47.192984+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:47 vm02 bash[23351]: audit 2026-03-09T18:00:47.192984+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:47.999 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:47.993+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v808) ==== 221+0+0 (secure 0 0 0) 0x7f68ec06b0b0 con 0x7f68f810f370 2026-03-09T18:00:48.057 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.053+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"} v 0) -- 0x7f68bc004910 con 0x7f68f810f370 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: cluster 2026-03-09T18:00:47.123423+0000 mgr.y (mgr.14505) 1288 : cluster [DBG] pgmap v1760: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: cluster 2026-03-09T18:00:47.123423+0000 mgr.y (mgr.14505) 1288 : cluster [DBG] pgmap v1760: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: audit 2026-03-09T18:00:47.990896+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: audit 2026-03-09T18:00:47.990896+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: cluster 2026-03-09T18:00:48.000647+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e808: 8 total, 8 up, 8 in 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: cluster 2026-03-09T18:00:48.000647+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e808: 8 total, 8 up, 8 in 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: audit 2026-03-09T18:00:48.057531+0000 mon.c (mon.2) 948 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: audit 2026-03-09T18:00:48.057531+0000 mon.c (mon.2) 948 : audit [INF] from='client.? 
192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: audit 2026-03-09T18:00:48.057789+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:48 vm00 bash[28333]: audit 2026-03-09T18:00:48.057789+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: cluster 2026-03-09T18:00:47.123423+0000 mgr.y (mgr.14505) 1288 : cluster [DBG] pgmap v1760: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: cluster 2026-03-09T18:00:47.123423+0000 mgr.y (mgr.14505) 1288 : cluster [DBG] pgmap v1760: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: audit 2026-03-09T18:00:47.990896+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: audit 2026-03-09T18:00:47.990896+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: cluster 2026-03-09T18:00:48.000647+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e808: 8 total, 8 up, 8 in 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: cluster 2026-03-09T18:00:48.000647+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e808: 8 total, 8 up, 8 in 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: audit 2026-03-09T18:00:48.057531+0000 mon.c (mon.2) 948 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: audit 2026-03-09T18:00:48.057531+0000 mon.c (mon.2) 948 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: audit 2026-03-09T18:00:48.057789+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:48 vm00 bash[20770]: audit 2026-03-09T18:00:48.057789+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: cluster 2026-03-09T18:00:47.123423+0000 mgr.y (mgr.14505) 1288 : cluster [DBG] pgmap v1760: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: cluster 2026-03-09T18:00:47.123423+0000 mgr.y (mgr.14505) 1288 : cluster [DBG] pgmap v1760: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: audit 2026-03-09T18:00:47.990896+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: audit 2026-03-09T18:00:47.990896+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: cluster 2026-03-09T18:00:48.000647+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e808: 8 total, 8 up, 8 in 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: cluster 2026-03-09T18:00:48.000647+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e808: 8 total, 8 up, 8 in 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: audit 2026-03-09T18:00:48.057531+0000 mon.c (mon.2) 948 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: audit 2026-03-09T18:00:48.057531+0000 mon.c (mon.2) 948 : audit [INF] from='client.? 192.168.123.100:0/962602904' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: audit 2026-03-09T18:00:48.057789+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:48 vm02 bash[23351]: audit 2026-03-09T18:00:48.057789+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T18:00:48.999 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.993+0000 7f68f4ff9640 1 -- 192.168.123.100:0/962602904 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v809) ==== 221+0+0 (secure 0 0 0) 0x7f68ec05e2a0 con 0x7f68f810f370 2026-03-09T18:00:48.999 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_bytes = 100 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 >> v2:192.168.123.100:6800/2673235927 conn(0x7f68cc077640 msgr2=0x7f68cc079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 --2- 192.168.123.100:0/962602904 >> v2:192.168.123.100:6800/2673235927 conn(0x7f68cc077640 0x7f68cc079b00 secure :-1 s=READY pgs=4268 cs=0 l=1 rev1=1 crypto rx=0x7f68e8004660 tx=0x7f68e80091c0 comp rx=0 tx=0).stop 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 msgr2=0x7f68f81a3c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 0x7f68f81a3c30 secure :-1 s=READY pgs=3200 cs=0 l=1 rev1=1 crypto rx=0x7f68ec00ae30 tx=0x7f68ec00c420 comp rx=0 tx=0).stop 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 shutdown_connections 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 --2- 192.168.123.100:0/962602904 >> v2:192.168.123.100:6800/2673235927 conn(0x7f68cc077640 0x7f68cc079b00 unknown :-1 s=CLOSED pgs=4268 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f68f810f370 0x7f68f81a3c30 unknown :-1 s=CLOSED pgs=3200 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f68f8101f60 0x7f68f819f8a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 --2- 192.168.123.100:0/962602904 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f68f8101640 0x7f68f819f360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 >> 
192.168.123.100:0/962602904 conn(0x7f68f80fd4f0 msgr2=0x7f68f810fa60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 shutdown_connections 2026-03-09T18:00:49.001 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:00:48.997+0000 7f68fe218640 1 -- 192.168.123.100:0/962602904 wait complete. 2026-03-09T18:00:49.014 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:50 vm00 bash[28333]: audit 2026-03-09T18:00:48.994481+0000 mon.a (mon.0) 3673 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:50 vm00 bash[28333]: audit 2026-03-09T18:00:48.994481+0000 mon.a (mon.0) 3673 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:50 vm00 bash[28333]: cluster 2026-03-09T18:00:48.998345+0000 mon.a (mon.0) 3674 : cluster [DBG] osdmap e809: 8 total, 8 up, 8 in 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:50 vm00 bash[28333]: cluster 2026-03-09T18:00:48.998345+0000 mon.a (mon.0) 3674 : cluster [DBG] osdmap e809: 8 total, 8 up, 8 in 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:50 vm00 bash[28333]: cluster 2026-03-09T18:00:49.123825+0000 mgr.y (mgr.14505) 1289 : cluster [DBG] pgmap v1763: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:50 vm00 bash[28333]: cluster 2026-03-09T18:00:49.123825+0000 mgr.y (mgr.14505) 1289 : cluster [DBG] pgmap v1763: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:50 vm00 bash[20770]: audit 2026-03-09T18:00:48.994481+0000 mon.a (mon.0) 3673 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:50 vm00 bash[20770]: audit 2026-03-09T18:00:48.994481+0000 mon.a (mon.0) 3673 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:50 vm00 bash[20770]: cluster 2026-03-09T18:00:48.998345+0000 mon.a (mon.0) 3674 : cluster [DBG] osdmap e809: 8 total, 8 up, 8 in 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:50 vm00 bash[20770]: cluster 2026-03-09T18:00:48.998345+0000 mon.a (mon.0) 3674 : cluster [DBG] osdmap e809: 8 total, 8 up, 8 in 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:50 vm00 bash[20770]: cluster 2026-03-09T18:00:49.123825+0000 mgr.y (mgr.14505) 1289 : cluster [DBG] pgmap v1763: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-09T18:00:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:50 vm00 bash[20770]: cluster 2026-03-09T18:00:49.123825+0000 mgr.y (mgr.14505) 1289 : cluster [DBG] pgmap v1763: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-09T18:00:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:50 vm02 bash[23351]: audit 2026-03-09T18:00:48.994481+0000 mon.a (mon.0) 3673 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:50 vm02 bash[23351]: audit 2026-03-09T18:00:48.994481+0000 mon.a (mon.0) 3673 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T18:00:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:50 vm02 bash[23351]: cluster 2026-03-09T18:00:48.998345+0000 mon.a (mon.0) 3674 : cluster [DBG] osdmap e809: 8 total, 8 up, 8 in 2026-03-09T18:00:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:50 vm02 bash[23351]: cluster 2026-03-09T18:00:48.998345+0000 mon.a (mon.0) 3674 : cluster [DBG] osdmap e809: 8 total, 8 up, 8 in 2026-03-09T18:00:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:50 vm02 bash[23351]: cluster 2026-03-09T18:00:49.123825+0000 mgr.y (mgr.14505) 1289 : cluster [DBG] pgmap v1763: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-09T18:00:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:50 vm02 bash[23351]: cluster 2026-03-09T18:00:49.123825+0000 mgr.y (mgr.14505) 1289 : cluster [DBG] pgmap v1763: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.124134+0000 mgr.y (mgr.14505) 1290 : cluster [DBG] pgmap v1764: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.124134+0000 mgr.y (mgr.14505) 1290 : cluster [DBG] pgmap v1764: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.969723+0000 mon.a (mon.0) 3675 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_bytes: 100 B) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.969723+0000 mon.a (mon.0) 3675 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_bytes: 100 B) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.969893+0000 mon.a (mon.0) 3676 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.969893+0000 mon.a (mon.0) 3676 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.976941+0000 mon.a (mon.0) 3677 : cluster [DBG] osdmap e810: 8 total, 8 up, 8 in 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:52 vm00 bash[28333]: cluster 2026-03-09T18:00:51.976941+0000 mon.a (mon.0) 3677 : cluster [DBG] osdmap e810: 8 total, 8 up, 8 in 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.124134+0000 mgr.y (mgr.14505) 1290 : cluster [DBG] pgmap v1764: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.124134+0000 mgr.y (mgr.14505) 1290 : cluster [DBG] pgmap v1764: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.969723+0000 mon.a (mon.0) 3675 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_bytes: 100 B) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.969723+0000 mon.a (mon.0) 3675 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_bytes: 100 B) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.969893+0000 mon.a (mon.0) 3676 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.969893+0000 mon.a (mon.0) 3676 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.976941+0000 mon.a (mon.0) 3677 : cluster [DBG] osdmap e810: 8 total, 8 up, 8 in 2026-03-09T18:00:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:52 vm00 bash[20770]: cluster 2026-03-09T18:00:51.976941+0000 mon.a (mon.0) 3677 : cluster [DBG] osdmap e810: 8 total, 8 up, 8 in 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.124134+0000 mgr.y (mgr.14505) 1290 : cluster [DBG] pgmap v1764: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 
B/s wr, 1 op/s 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.124134+0000 mgr.y (mgr.14505) 1290 : cluster [DBG] pgmap v1764: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.969723+0000 mon.a (mon.0) 3675 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_bytes: 100 B) 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.969723+0000 mon.a (mon.0) 3675 : cluster [WRN] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' is full (reached quota's max_bytes: 100 B) 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.969893+0000 mon.a (mon.0) 3676 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.969893+0000 mon.a (mon.0) 3676 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.976941+0000 mon.a (mon.0) 3677 : cluster [DBG] osdmap e810: 8 total, 8 up, 8 in 2026-03-09T18:00:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:52 vm02 bash[23351]: cluster 2026-03-09T18:00:51.976941+0000 mon.a (mon.0) 3677 : cluster [DBG] osdmap e810: 8 total, 8 up, 8 in 2026-03-09T18:00:53.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:00:53 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:54 vm00 bash[28333]: cluster 2026-03-09T18:00:53.124662+0000 mgr.y (mgr.14505) 1291 : cluster [DBG] pgmap v1766: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:54 vm00 bash[28333]: cluster 2026-03-09T18:00:53.124662+0000 mgr.y (mgr.14505) 1291 : cluster [DBG] pgmap v1766: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:54 vm00 bash[28333]: audit 2026-03-09T18:00:53.125219+0000 mgr.y (mgr.14505) 1292 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:54 vm00 bash[28333]: audit 2026-03-09T18:00:53.125219+0000 mgr.y (mgr.14505) 1292 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:54 vm00 bash[20770]: cluster 2026-03-09T18:00:53.124662+0000 mgr.y (mgr.14505) 1291 : cluster [DBG] pgmap v1766: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:54 vm00 bash[20770]: cluster 2026-03-09T18:00:53.124662+0000 mgr.y (mgr.14505) 1291 : cluster [DBG] pgmap v1766: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:54 vm00 bash[20770]: audit 2026-03-09T18:00:53.125219+0000 mgr.y (mgr.14505) 1292 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:54 vm00 bash[20770]: audit 2026-03-09T18:00:53.125219+0000 mgr.y (mgr.14505) 1292 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:54 vm02 bash[23351]: cluster 2026-03-09T18:00:53.124662+0000 mgr.y (mgr.14505) 1291 : cluster [DBG] pgmap v1766: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:54 vm02 bash[23351]: cluster 2026-03-09T18:00:53.124662+0000 mgr.y (mgr.14505) 1291 : cluster [DBG] pgmap v1766: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T18:00:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:54 vm02 bash[23351]: audit 2026-03-09T18:00:53.125219+0000 mgr.y (mgr.14505) 1292 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:54 vm02 bash[23351]: audit 2026-03-09T18:00:53.125219+0000 mgr.y (mgr.14505) 1292 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:00:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:56 vm00 bash[28333]: cluster 2026-03-09T18:00:55.124929+0000 mgr.y (mgr.14505) 1293 : cluster [DBG] pgmap v1767: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T18:00:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:56 vm00 bash[28333]: cluster 2026-03-09T18:00:55.124929+0000 mgr.y (mgr.14505) 1293 : cluster [DBG] pgmap v1767: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T18:00:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:56 vm00 bash[20770]: cluster 2026-03-09T18:00:55.124929+0000 mgr.y (mgr.14505) 1293 : cluster [DBG] pgmap v1767: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T18:00:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:56 vm00 bash[20770]: cluster 2026-03-09T18:00:55.124929+0000 mgr.y (mgr.14505) 1293 : cluster [DBG] pgmap v1767: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T18:00:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:00:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:00:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:00:56.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:56 vm02 bash[23351]: cluster 2026-03-09T18:00:55.124929+0000 mgr.y (mgr.14505) 1293 : cluster [DBG] pgmap v1767: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T18:00:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:56 vm02 
bash[23351]: cluster 2026-03-09T18:00:55.124929+0000 mgr.y (mgr.14505) 1293 : cluster [DBG] pgmap v1767: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 718 B/s rd, 0 op/s 2026-03-09T18:00:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:58 vm00 bash[28333]: cluster 2026-03-09T18:00:57.125184+0000 mgr.y (mgr.14505) 1294 : cluster [DBG] pgmap v1768: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 630 B/s rd, 0 op/s 2026-03-09T18:00:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:58 vm00 bash[28333]: cluster 2026-03-09T18:00:57.125184+0000 mgr.y (mgr.14505) 1294 : cluster [DBG] pgmap v1768: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 630 B/s rd, 0 op/s 2026-03-09T18:00:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:58 vm00 bash[20770]: cluster 2026-03-09T18:00:57.125184+0000 mgr.y (mgr.14505) 1294 : cluster [DBG] pgmap v1768: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 630 B/s rd, 0 op/s 2026-03-09T18:00:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:58 vm00 bash[20770]: cluster 2026-03-09T18:00:57.125184+0000 mgr.y (mgr.14505) 1294 : cluster [DBG] pgmap v1768: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 630 B/s rd, 0 op/s 2026-03-09T18:00:58.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:58 vm02 bash[23351]: cluster 2026-03-09T18:00:57.125184+0000 mgr.y (mgr.14505) 1294 : cluster [DBG] pgmap v1768: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 630 B/s rd, 0 op/s 2026-03-09T18:00:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:58 vm02 bash[23351]: cluster 2026-03-09T18:00:57.125184+0000 mgr.y (mgr.14505) 1294 : cluster [DBG] pgmap v1768: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 630 B/s rd, 0 op/s 2026-03-09T18:00:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:59 vm02 bash[23351]: audit 2026-03-09T18:00:58.967788+0000 mon.c (mon.2) 949 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:00:59 vm02 bash[23351]: audit 2026-03-09T18:00:58.967788+0000 mon.c (mon.2) 949 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:59 vm00 bash[28333]: audit 2026-03-09T18:00:58.967788+0000 mon.c (mon.2) 949 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:00:59 vm00 bash[28333]: audit 2026-03-09T18:00:58.967788+0000 mon.c (mon.2) 949 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:59 vm00 bash[20770]: audit 2026-03-09T18:00:58.967788+0000 mon.c (mon.2) 949 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:00:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:00:59 vm00 bash[20770]: audit 2026-03-09T18:00:58.967788+0000 mon.c (mon.2) 949 : 
audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:00 vm02 bash[23351]: cluster 2026-03-09T18:00:59.125915+0000 mgr.y (mgr.14505) 1295 : cluster [DBG] pgmap v1769: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:00 vm02 bash[23351]: cluster 2026-03-09T18:00:59.125915+0000 mgr.y (mgr.14505) 1295 : cluster [DBG] pgmap v1769: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:00 vm00 bash[28333]: cluster 2026-03-09T18:00:59.125915+0000 mgr.y (mgr.14505) 1295 : cluster [DBG] pgmap v1769: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:00 vm00 bash[28333]: cluster 2026-03-09T18:00:59.125915+0000 mgr.y (mgr.14505) 1295 : cluster [DBG] pgmap v1769: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:00 vm00 bash[20770]: cluster 2026-03-09T18:00:59.125915+0000 mgr.y (mgr.14505) 1295 : cluster [DBG] pgmap v1769: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:00 vm00 bash[20770]: cluster 2026-03-09T18:00:59.125915+0000 mgr.y (mgr.14505) 1295 : cluster [DBG] pgmap v1769: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:02 vm02 bash[23351]: cluster 2026-03-09T18:01:01.126235+0000 mgr.y (mgr.14505) 1296 : cluster [DBG] pgmap v1770: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:02 vm02 bash[23351]: cluster 2026-03-09T18:01:01.126235+0000 mgr.y (mgr.14505) 1296 : cluster [DBG] pgmap v1770: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:02 vm00 bash[28333]: cluster 2026-03-09T18:01:01.126235+0000 mgr.y (mgr.14505) 1296 : cluster [DBG] pgmap v1770: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:02 vm00 bash[28333]: cluster 2026-03-09T18:01:01.126235+0000 mgr.y (mgr.14505) 1296 : cluster [DBG] pgmap v1770: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:02 vm00 bash[20770]: cluster 2026-03-09T18:01:01.126235+0000 mgr.y (mgr.14505) 1296 : cluster [DBG] pgmap v1770: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:02 vm00 bash[20770]: cluster 2026-03-09T18:01:01.126235+0000 mgr.y (mgr.14505) 1296 : cluster [DBG] pgmap v1770: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:01:03.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:01:03 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:01:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:04 vm02 bash[23351]: cluster 2026-03-09T18:01:03.126944+0000 mgr.y (mgr.14505) 1297 : cluster [DBG] pgmap v1771: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:01:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:04 vm02 bash[23351]: cluster 2026-03-09T18:01:03.126944+0000 mgr.y (mgr.14505) 1297 : cluster [DBG] pgmap v1771: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:01:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:04 vm02 bash[23351]: audit 2026-03-09T18:01:03.133565+0000 mgr.y (mgr.14505) 1298 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:04 vm02 bash[23351]: audit 2026-03-09T18:01:03.133565+0000 mgr.y (mgr.14505) 1298 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:04 vm00 bash[28333]: cluster 2026-03-09T18:01:03.126944+0000 mgr.y (mgr.14505) 1297 : cluster [DBG] pgmap v1771: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:04 vm00 bash[28333]: cluster 2026-03-09T18:01:03.126944+0000 mgr.y (mgr.14505) 1297 : cluster [DBG] pgmap v1771: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:04 vm00 bash[28333]: audit 2026-03-09T18:01:03.133565+0000 mgr.y (mgr.14505) 1298 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:04 vm00 bash[28333]: audit 2026-03-09T18:01:03.133565+0000 mgr.y (mgr.14505) 1298 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:04 vm00 bash[20770]: cluster 2026-03-09T18:01:03.126944+0000 mgr.y (mgr.14505) 1297 : cluster [DBG] pgmap v1771: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:04 vm00 bash[20770]: cluster 2026-03-09T18:01:03.126944+0000 mgr.y (mgr.14505) 1297 : cluster [DBG] pgmap v1771: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:04 vm00 bash[20770]: audit 2026-03-09T18:01:03.133565+0000 mgr.y (mgr.14505) 1298 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:04 vm00 bash[20770]: audit 2026-03-09T18:01:03.133565+0000 mgr.y (mgr.14505) 1298 : audit [DBG] 
from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:06.557 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:01:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:01:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:01:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:06 vm00 bash[20770]: cluster 2026-03-09T18:01:05.127255+0000 mgr.y (mgr.14505) 1299 : cluster [DBG] pgmap v1772: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:07.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:06 vm00 bash[20770]: cluster 2026-03-09T18:01:05.127255+0000 mgr.y (mgr.14505) 1299 : cluster [DBG] pgmap v1772: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:06 vm00 bash[28333]: cluster 2026-03-09T18:01:05.127255+0000 mgr.y (mgr.14505) 1299 : cluster [DBG] pgmap v1772: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:07.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:06 vm00 bash[28333]: cluster 2026-03-09T18:01:05.127255+0000 mgr.y (mgr.14505) 1299 : cluster [DBG] pgmap v1772: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:06 vm02 bash[23351]: cluster 2026-03-09T18:01:05.127255+0000 mgr.y (mgr.14505) 1299 : cluster [DBG] pgmap v1772: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:07.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:06 vm02 bash[23351]: cluster 2026-03-09T18:01:05.127255+0000 mgr.y (mgr.14505) 1299 : cluster [DBG] pgmap v1772: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:07 vm00 bash[20770]: cluster 2026-03-09T18:01:07.127543+0000 mgr.y (mgr.14505) 1300 : cluster [DBG] pgmap v1773: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:08.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:07 vm00 bash[20770]: cluster 2026-03-09T18:01:07.127543+0000 mgr.y (mgr.14505) 1300 : cluster [DBG] pgmap v1773: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:07 vm00 bash[28333]: cluster 2026-03-09T18:01:07.127543+0000 mgr.y (mgr.14505) 1300 : cluster [DBG] pgmap v1773: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:08.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:07 vm00 bash[28333]: cluster 2026-03-09T18:01:07.127543+0000 mgr.y (mgr.14505) 1300 : cluster [DBG] pgmap v1773: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:07 vm02 bash[23351]: cluster 2026-03-09T18:01:07.127543+0000 mgr.y (mgr.14505) 1300 : cluster [DBG] pgmap v1773: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:08.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 
09 18:01:07 vm02 bash[23351]: cluster 2026-03-09T18:01:07.127543+0000 mgr.y (mgr.14505) 1300 : cluster [DBG] pgmap v1773: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:10 vm00 bash[20770]: cluster 2026-03-09T18:01:09.128052+0000 mgr.y (mgr.14505) 1301 : cluster [DBG] pgmap v1774: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:10.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:10 vm00 bash[20770]: cluster 2026-03-09T18:01:09.128052+0000 mgr.y (mgr.14505) 1301 : cluster [DBG] pgmap v1774: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:10 vm00 bash[28333]: cluster 2026-03-09T18:01:09.128052+0000 mgr.y (mgr.14505) 1301 : cluster [DBG] pgmap v1774: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:10.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:10 vm00 bash[28333]: cluster 2026-03-09T18:01:09.128052+0000 mgr.y (mgr.14505) 1301 : cluster [DBG] pgmap v1774: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:10 vm02 bash[23351]: cluster 2026-03-09T18:01:09.128052+0000 mgr.y (mgr.14505) 1301 : cluster [DBG] pgmap v1774: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:10.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:10 vm02 bash[23351]: cluster 2026-03-09T18:01:09.128052+0000 mgr.y (mgr.14505) 1301 : cluster [DBG] pgmap v1774: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:12 vm00 bash[20770]: cluster 2026-03-09T18:01:11.128405+0000 mgr.y (mgr.14505) 1302 : cluster [DBG] pgmap v1775: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:12.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:12 vm00 bash[20770]: cluster 2026-03-09T18:01:11.128405+0000 mgr.y (mgr.14505) 1302 : cluster [DBG] pgmap v1775: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:12 vm00 bash[28333]: cluster 2026-03-09T18:01:11.128405+0000 mgr.y (mgr.14505) 1302 : cluster [DBG] pgmap v1775: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:12.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:12 vm00 bash[28333]: cluster 2026-03-09T18:01:11.128405+0000 mgr.y (mgr.14505) 1302 : cluster [DBG] pgmap v1775: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:12 vm02 bash[23351]: cluster 2026-03-09T18:01:11.128405+0000 mgr.y (mgr.14505) 1302 : cluster [DBG] pgmap v1775: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:12.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:12 vm02 bash[23351]: cluster 2026-03-09T18:01:11.128405+0000 mgr.y 
(mgr.14505) 1302 : cluster [DBG] pgmap v1775: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:13.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:01:13 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:01:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:14 vm02 bash[23351]: cluster 2026-03-09T18:01:13.128942+0000 mgr.y (mgr.14505) 1303 : cluster [DBG] pgmap v1776: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:14 vm02 bash[23351]: cluster 2026-03-09T18:01:13.128942+0000 mgr.y (mgr.14505) 1303 : cluster [DBG] pgmap v1776: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:14 vm02 bash[23351]: audit 2026-03-09T18:01:13.141355+0000 mgr.y (mgr.14505) 1304 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:14 vm02 bash[23351]: audit 2026-03-09T18:01:13.141355+0000 mgr.y (mgr.14505) 1304 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:14 vm02 bash[23351]: audit 2026-03-09T18:01:13.974465+0000 mon.c (mon.2) 950 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:14 vm02 bash[23351]: audit 2026-03-09T18:01:13.974465+0000 mon.c (mon.2) 950 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:14 vm00 bash[20770]: cluster 2026-03-09T18:01:13.128942+0000 mgr.y (mgr.14505) 1303 : cluster [DBG] pgmap v1776: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:14 vm00 bash[20770]: cluster 2026-03-09T18:01:13.128942+0000 mgr.y (mgr.14505) 1303 : cluster [DBG] pgmap v1776: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:14 vm00 bash[20770]: audit 2026-03-09T18:01:13.141355+0000 mgr.y (mgr.14505) 1304 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:14 vm00 bash[20770]: audit 2026-03-09T18:01:13.141355+0000 mgr.y (mgr.14505) 1304 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:14 vm00 bash[20770]: audit 2026-03-09T18:01:13.974465+0000 mon.c (mon.2) 950 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:01:14 vm00 bash[20770]: audit 2026-03-09T18:01:13.974465+0000 mon.c (mon.2) 950 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:14 vm00 bash[28333]: cluster 2026-03-09T18:01:13.128942+0000 mgr.y (mgr.14505) 1303 : cluster [DBG] pgmap v1776: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:14 vm00 bash[28333]: cluster 2026-03-09T18:01:13.128942+0000 mgr.y (mgr.14505) 1303 : cluster [DBG] pgmap v1776: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:14 vm00 bash[28333]: audit 2026-03-09T18:01:13.141355+0000 mgr.y (mgr.14505) 1304 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:14 vm00 bash[28333]: audit 2026-03-09T18:01:13.141355+0000 mgr.y (mgr.14505) 1304 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:14 vm00 bash[28333]: audit 2026-03-09T18:01:13.974465+0000 mon.c (mon.2) 950 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:15.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:14 vm00 bash[28333]: audit 2026-03-09T18:01:13.974465+0000 mon.c (mon.2) 950 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:15 vm02 bash[23351]: cluster 2026-03-09T18:01:15.129245+0000 mgr.y (mgr.14505) 1305 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:15.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:15 vm02 bash[23351]: cluster 2026-03-09T18:01:15.129245+0000 mgr.y (mgr.14505) 1305 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:15 vm00 bash[20770]: cluster 2026-03-09T18:01:15.129245+0000 mgr.y (mgr.14505) 1305 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:16.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:15 vm00 bash[20770]: cluster 2026-03-09T18:01:15.129245+0000 mgr.y (mgr.14505) 1305 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:15 vm00 bash[28333]: cluster 2026-03-09T18:01:15.129245+0000 mgr.y (mgr.14505) 1305 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:16.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:15 vm00 bash[28333]: cluster 2026-03-09T18:01:15.129245+0000 mgr.y 
(mgr.14505) 1305 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:01:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:01:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:01:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:18 vm00 bash[28333]: cluster 2026-03-09T18:01:17.129510+0000 mgr.y (mgr.14505) 1306 : cluster [DBG] pgmap v1778: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:18.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:18 vm00 bash[28333]: cluster 2026-03-09T18:01:17.129510+0000 mgr.y (mgr.14505) 1306 : cluster [DBG] pgmap v1778: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:18 vm00 bash[20770]: cluster 2026-03-09T18:01:17.129510+0000 mgr.y (mgr.14505) 1306 : cluster [DBG] pgmap v1778: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:18.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:18 vm00 bash[20770]: cluster 2026-03-09T18:01:17.129510+0000 mgr.y (mgr.14505) 1306 : cluster [DBG] pgmap v1778: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:18.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:18 vm02 bash[23351]: cluster 2026-03-09T18:01:17.129510+0000 mgr.y (mgr.14505) 1306 : cluster [DBG] pgmap v1778: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:18.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:18 vm02 bash[23351]: cluster 2026-03-09T18:01:17.129510+0000 mgr.y (mgr.14505) 1306 : cluster [DBG] pgmap v1778: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:19.015 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=131810 2026-03-09T18:01:19.015 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_bytes 0 2026-03-09T18:01:19.015 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put two /etc/passwd 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/325704267 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 msgr2=0x7fc128075da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/325704267 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc128075da0 secure :-1 s=READY pgs=3137 cs=0 l=1 rev1=1 crypto rx=0x7fc110009a30 tx=0x7fc11001c990 comp rx=0 tx=0).stop 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/325704267 shutdown_connections 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/325704267 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc128113710 0x7fc128115b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
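The "+ ..." lines above are the set -x traces of the pool-quota workunit running on client.0. Stripped of the journal and messenger noise, the window covered here reduces to the sequence sketched below; pool and object names are the ones from this run, and, judging by the "pid=" trace, the script also tracks a background write while the quota is in force, which is omitted from this sketch:

    pool=bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c      # pool created earlier in the run
    ceph osd pool set-quota "$pool" max_bytes 100  # 100-byte quota; the mon soon flags the pool full (POOL_FULL)
    sleep 30                                       # give the quota/full flag time to propagate through the osdmap
    ceph osd pool set-quota "$pool" max_bytes 0    # max_bytes 0 removes the quota again
    rados -p "$pool" put two /etc/passwd           # write attempted once the quota is lifted

The POOL_FULL health warning and the osdmap epoch bumps recorded by mon.a above are the expected effect of the 100-byte quota taking hold and then being cleared.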
2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/325704267 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc128075da0 unknown :-1 s=CLOSED pgs=3137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/325704267 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 0x7fc128075420 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/325704267 >> 192.168.123.100:0/325704267 conn(0x7fc1280fe640 msgr2=0x7fc128100a60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/325704267 shutdown_connections 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/325704267 wait complete. 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 Processor -- start 2026-03-09T18:01:19.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- start start 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc1281a4350 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 0x7fc1281a4890 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc128113710 0x7fc1281a8c20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc12811bd50 con 0x7fc128075960 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fc12811bbd0 con 0x7fc128106810 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fc12811bed0 con 0x7fc128113710 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 0x7fc1281a4890 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 
0x7fc1281a4890 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.100:45728/0 (socket says 192.168.123.100:45728) 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 -- 192.168.123.100:0/109046873 learned_addr learned my addr 192.168.123.100:0/109046873 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12d742640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc128113710 0x7fc1281a8c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12cf41640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc1281a4350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 -- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc128113710 msgr2=0x7fc1281a8c20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc128113710 0x7fc1281a8c20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 -- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 msgr2=0x7fc1281a4350 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc1281a4350 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc1281a9300 con 0x7fc128106810 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12cf41640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc1281a4350 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
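The "--2-" and "mark_down" lines in this stretch are routine msgr2 session setup and teardown from the rados client; they show up in the workunit stderr because the client is running with the messenger debug level raised to 1. As a rough sketch (option spelling follows the usual Ceph CLI convention of passing config options on the command line; where the debug output lands depends on the client's log settings), the same chatter can be reproduced with:

    rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c --debug-ms 1 put two /etc/passwd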
2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11ffff640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 0x7fc1281a4890 secure :-1 s=READY pgs=2867 cs=0 l=1 rev1=1 crypto rx=0x7fc11001ce70 tx=0x7fc110002760 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc110004280 con 0x7fc128106810 2026-03-09T18:01:19.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc1281a9590 con 0x7fc128106810 2026-03-09T18:01:19.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc1281b0ed0 con 0x7fc128106810 2026-03-09T18:01:19.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fc110002d10 con 0x7fc128106810 2026-03-09T18:01:19.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.077+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc1100b87c0 con 0x7fc128106810 2026-03-09T18:01:19.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fc1100b8960 con 0x7fc128106810 2026-03-09T18:01:19.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc11dffb640 1 --2- 192.168.123.100:0/109046873 >> v2:192.168.123.100:6800/2673235927 conn(0x7fc0fc0776d0 0x7fc0fc079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:19.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 5 ==== osd_map(810..810 src has 1..810) ==== 7736+0+0 (secure 0 0 0) 0x7fc1101342e0 con 0x7fc128106810 2026-03-09T18:01:19.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc128076b60 con 0x7fc128106810 2026-03-09T18:01:19.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=811}) -- 0x7fc0fc082c20 con 0x7fc128106810 2026-03-09T18:01:19.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc12cf41640 1 --2- 192.168.123.100:0/109046873 >> v2:192.168.123.100:6800/2673235927 conn(0x7fc0fc0776d0 0x7fc0fc079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:19.085 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.081+0000 7fc12cf41640 1 --2- 192.168.123.100:0/109046873 >> v2:192.168.123.100:6800/2673235927 conn(0x7fc0fc0776d0 0x7fc0fc079b90 secure :-1 s=READY pgs=4270 cs=0 l=1 rev1=1 crypto rx=0x7fc118005fd0 tx=0x7fc118005ea0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:19.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.085+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fc1100bd050 con 0x7fc128106810 2026-03-09T18:01:19.182 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.177+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"} v 0) -- 0x7fc1281a56b0 con 0x7fc128106810 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:19 vm00 bash[28333]: audit 2026-03-09T18:01:19.182538+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:19 vm00 bash[28333]: audit 2026-03-09T18:01:19.182538+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:19 vm00 bash[28333]: audit 2026-03-09T18:01:19.183287+0000 mon.a (mon.0) 3678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:19 vm00 bash[28333]: audit 2026-03-09T18:01:19.183287+0000 mon.a (mon.0) 3678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:19 vm00 bash[20770]: audit 2026-03-09T18:01:19.182538+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:19 vm00 bash[20770]: audit 2026-03-09T18:01:19.182538+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:19 vm00 bash[20770]: audit 2026-03-09T18:01:19.183287+0000 mon.a (mon.0) 3678 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:19 vm00 bash[20770]: audit 2026-03-09T18:01:19.183287+0000 mon.a (mon.0) 3678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.577 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.573+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 7 ==== osd_map(811..811 src has 1..811) ==== 628+0+0 (secure 0 0 0) 0x7fc1100f8aa0 con 0x7fc128106810 2026-03-09T18:01:19.577 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.573+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=812}) -- 0x7fc0fc083790 con 0x7fc128106810 2026-03-09T18:01:19.579 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.573+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v811) ==== 217+0+0 (secure 0 0 0) 0x7fc110016610 con 0x7fc128106810 2026-03-09T18:01:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:19 vm02 bash[23351]: audit 2026-03-09T18:01:19.182538+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:19 vm02 bash[23351]: audit 2026-03-09T18:01:19.182538+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:19 vm02 bash[23351]: audit 2026-03-09T18:01:19.183287+0000 mon.a (mon.0) 3678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:19 vm02 bash[23351]: audit 2026-03-09T18:01:19.183287+0000 mon.a (mon.0) 3678 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:19.637 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:19.633+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"} v 0) -- 0x7fc1281b10a0 con 0x7fc128106810 2026-03-09T18:01:20.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.585+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 9 ==== osd_map(812..812 src has 1..812) ==== 628+0+0 (secure 0 0 0) 0x7fc110004420 con 0x7fc128106810 2026-03-09T18:01:20.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.585+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=813}) -- 0x7fc0fc083bc0 con 0x7fc128106810 2026-03-09T18:01:20.599 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.593+0000 7fc11dffb640 1 -- 192.168.123.100:0/109046873 <== mon.1 v2:192.168.123.102:3300/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v812) ==== 217+0+0 (secure 0 0 0) 0x7fc110100ab0 con 0x7fc128106810 2026-03-09T18:01:20.599 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_bytes = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:01:20.601 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 >> v2:192.168.123.100:6800/2673235927 conn(0x7fc0fc0776d0 msgr2=0x7fc0fc079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:20.601 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/109046873 >> v2:192.168.123.100:6800/2673235927 conn(0x7fc0fc0776d0 0x7fc0fc079b90 secure :-1 s=READY pgs=4270 cs=0 l=1 rev1=1 crypto rx=0x7fc118005fd0 tx=0x7fc118005ea0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 msgr2=0x7fc1281a4890 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 0x7fc1281a4890 secure :-1 s=READY pgs=2867 cs=0 l=1 rev1=1 crypto rx=0x7fc11001ce70 tx=0x7fc110002760 comp rx=0 tx=0).stop 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 shutdown_connections 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/109046873 >> v2:192.168.123.100:6800/2673235927 conn(0x7fc0fc0776d0 0x7fc0fc079b90 unknown :-1 s=CLOSED pgs=4270 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/109046873 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc128113710 0x7fc1281a8c20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc128106810 0x7fc1281a4890 unknown :-1 s=CLOSED pgs=2867 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 --2- 192.168.123.100:0/109046873 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc128075960 0x7fc1281a4350 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 >> 192.168.123.100:0/109046873 conn(0x7fc1280fe640 msgr2=0x7fc128100500 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 shutdown_connections 2026-03-09T18:01:20.602 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.597+0000 7fc12f1cc640 1 -- 192.168.123.100:0/109046873 wait complete. 2026-03-09T18:01:20.617 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_objects 0 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.677+0000 7f3eaae95640 1 -- 192.168.123.100:0/413796734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea4101820 msgr2=0x7f3ea4101c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.677+0000 7f3eaae95640 1 --2- 192.168.123.100:0/413796734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea4101820 0x7f3ea4101c80 secure :-1 s=READY pgs=3138 cs=0 l=1 rev1=1 crypto rx=0x7f3e94009a60 tx=0x7f3e9401c900 comp rx=0 tx=0).stop 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- 192.168.123.100:0/413796734 shutdown_connections 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 --2- 192.168.123.100:0/413796734 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ea41021c0 0x7f3ea410e6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 --2- 192.168.123.100:0/413796734 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea4101820 0x7f3ea4101c80 unknown :-1 s=CLOSED pgs=3138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 --2- 192.168.123.100:0/413796734 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4107820 0x7f3ea4107c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- 192.168.123.100:0/413796734 >> 192.168.123.100:0/413796734 conn(0x7f3ea40fd540 msgr2=0x7f3ea40ff960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:20.684 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- 192.168.123.100:0/413796734 shutdown_connections 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- 192.168.123.100:0/413796734 wait complete. 2026-03-09T18:01:20.684 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 Processor -- start 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- start start 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4101820 0x7f3ea419ae20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 0x7f3ea419b360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ea4107820 0x7f3ea419f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3ea4112810 con 0x7f3ea41021c0 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f3ea4112690 con 0x7f3ea4107820 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3ea4112990 con 0x7f3ea4101820 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3ea8c0a640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4101820 0x7f3ea419ae20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 0x7f3ea419b360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 0x7f3ea419b360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:41112/0 (socket says 192.168.123.100:41112) 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 -- 192.168.123.100:0/3638810696 learned_addr learned my addr 192.168.123.100:0/3638810696 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3ea8c0a640 1 --2- >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4101820 0x7f3ea419ae20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:34710/0 (socket says 192.168.123.100:34710) 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3ea940b640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ea4107820 0x7f3ea419f6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 -- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4101820 msgr2=0x7f3ea419ae20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4101820 0x7f3ea419ae20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 -- 192.168.123.100:0/3638810696 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ea4107820 msgr2=0x7f3ea419f6f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ea4107820 0x7f3ea419f6f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3ea419fe70 con 0x7f3ea41021c0 2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3ea940b640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ea4107820 0x7f3ea419f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T18:01:20.685 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e9bfff640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 0x7f3ea419b360 secure :-1 s=READY pgs=3139 cs=0 l=1 rev1=1 crypto rx=0x7f3e94098420 tx=0x7f3e940a5ed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3e940043e0 con 0x7f3ea41021c0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f3e94004580 con 0x7f3ea41021c0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3e94005030 con 0x7f3ea41021c0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3ea41a0100 con 0x7f3ea41021c0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3eaae95640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3ea41a79a0 con 0x7f3ea41021c0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f3e940c0020 con 0x7f3ea41021c0 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.681+0000 7f3e99ffb640 1 --2- 192.168.123.100:0/3638810696 >> v2:192.168.123.100:6800/2673235927 conn(0x7f3e800776d0 0x7f3e80079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:20.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.685+0000 7f3ea8c0a640 1 --2- 192.168.123.100:0/3638810696 >> v2:192.168.123.100:6800/2673235927 conn(0x7f3e800776d0 0x7f3e80079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:20.692 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.685+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(812..812 src has 1..812) ==== 7736+0+0 (secure 0 0 0) 0x7f3e94134a10 con 0x7f3ea41021c0 2026-03-09T18:01:20.692 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.685+0000 7f3eaae95640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3e6c005190 con 0x7f3ea41021c0 2026-03-09T18:01:20.692 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.685+0000 7f3ea8c0a640 1 --2- 192.168.123.100:0/3638810696 >> v2:192.168.123.100:6800/2673235927 conn(0x7f3e800776d0 0x7f3e80079b90 secure :-1 s=READY pgs=4271 cs=0 l=1 rev1=1 crypto rx=0x7f3e8c004640 tx=0x7f3e8c009210 comp rx=0 tx=0).ready 
entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:20.692 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.689+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=813}) -- 0x7f3e80082c20 con 0x7f3ea41021c0 2026-03-09T18:01:20.693 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.689+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3e94016610 con 0x7f3ea41021c0 2026-03-09T18:01:20.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:20.781+0000 7f3eaae95640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"} v 0) -- 0x7f3e6c005480 con 0x7f3ea41021c0 2026-03-09T18:01:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: cluster 2026-03-09T18:01:19.130234+0000 mgr.y (mgr.14505) 1307 : cluster [DBG] pgmap v1779: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: cluster 2026-03-09T18:01:19.130234+0000 mgr.y (mgr.14505) 1307 : cluster [DBG] pgmap v1779: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:20.885 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: audit 2026-03-09T18:01:19.572684+0000 mon.a (mon.0) 3679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: audit 2026-03-09T18:01:19.572684+0000 mon.a (mon.0) 3679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: cluster 2026-03-09T18:01:19.583267+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e811: 8 total, 8 up, 8 in 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: cluster 2026-03-09T18:01:19.583267+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e811: 8 total, 8 up, 8 in 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: audit 2026-03-09T18:01:19.637616+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: audit 2026-03-09T18:01:19.637616+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 
192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: audit 2026-03-09T18:01:19.638949+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:20 vm02 bash[23351]: audit 2026-03-09T18:01:19.638949+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: cluster 2026-03-09T18:01:19.130234+0000 mgr.y (mgr.14505) 1307 : cluster [DBG] pgmap v1779: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: cluster 2026-03-09T18:01:19.130234+0000 mgr.y (mgr.14505) 1307 : cluster [DBG] pgmap v1779: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: audit 2026-03-09T18:01:19.572684+0000 mon.a (mon.0) 3679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: audit 2026-03-09T18:01:19.572684+0000 mon.a (mon.0) 3679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: cluster 2026-03-09T18:01:19.583267+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e811: 8 total, 8 up, 8 in 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: cluster 2026-03-09T18:01:19.583267+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e811: 8 total, 8 up, 8 in 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: audit 2026-03-09T18:01:19.637616+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: audit 2026-03-09T18:01:19.637616+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: audit 2026-03-09T18:01:19.638949+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:20 vm00 bash[20770]: audit 2026-03-09T18:01:19.638949+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: cluster 2026-03-09T18:01:19.130234+0000 mgr.y (mgr.14505) 1307 : cluster [DBG] pgmap v1779: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: cluster 2026-03-09T18:01:19.130234+0000 mgr.y (mgr.14505) 1307 : cluster [DBG] pgmap v1779: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:21.049 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: audit 2026-03-09T18:01:19.572684+0000 mon.a (mon.0) 3679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: audit 2026-03-09T18:01:19.572684+0000 mon.a (mon.0) 3679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: cluster 2026-03-09T18:01:19.583267+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e811: 8 total, 8 up, 8 in 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: cluster 2026-03-09T18:01:19.583267+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e811: 8 total, 8 up, 8 in 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: audit 2026-03-09T18:01:19.637616+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: audit 2026-03-09T18:01:19.637616+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/109046873' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: audit 2026-03-09T18:01:19.638949+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.050 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:20 vm00 bash[28333]: audit 2026-03-09T18:01:19.638949+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:01:21.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.609+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v813) ==== 221+0+0 (secure 0 0 0) 0x7f3e941000d0 con 0x7f3ea41021c0 2026-03-09T18:01:21.617 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.613+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(813..813 src has 1..813) ==== 628+0+0 (secure 0 0 0) 0x7f3e940f91d0 con 0x7f3ea41021c0 2026-03-09T18:01:21.617 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.613+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=814}) -- 0x7f3e80083b80 con 0x7f3ea41021c0 2026-03-09T18:01:21.671 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.665+0000 7f3eaae95640 1 -- 192.168.123.100:0/3638810696 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"} v 0) -- 0x7f3e6c004a20 con 0x7f3ea41021c0 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: audit 2026-03-09T18:01:20.587037+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: audit 2026-03-09T18:01:20.587037+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: cluster 2026-03-09T18:01:20.603888+0000 mon.a (mon.0) 3683 : cluster [DBG] osdmap e812: 8 total, 8 up, 8 in 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: cluster 2026-03-09T18:01:20.603888+0000 mon.a (mon.0) 3683 : cluster [DBG] osdmap e812: 8 total, 8 up, 8 in 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: audit 2026-03-09T18:01:20.787150+0000 mon.a (mon.0) 3684 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: audit 2026-03-09T18:01:20.787150+0000 mon.a (mon.0) 3684 : audit [INF] from='client.? 
192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: cluster 2026-03-09T18:01:21.130556+0000 mgr.y (mgr.14505) 1308 : cluster [DBG] pgmap v1782: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:01:21.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:21 vm02 bash[23351]: cluster 2026-03-09T18:01:21.130556+0000 mgr.y (mgr.14505) 1308 : cluster [DBG] pgmap v1782: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:01:21.977 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.973+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v814) ==== 221+0+0 (secure 0 0 0) 0x7f3e941011e0 con 0x7f3ea41021c0 2026-03-09T18:01:21.977 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.973+0000 7f3e99ffb640 1 -- 192.168.123.100:0/3638810696 <== mon.0 v2:192.168.123.100:3300/0 10 ==== osd_map(814..814 src has 1..814) ==== 628+0+0 (secure 0 0 0) 0x7f3e940047d0 con 0x7f3ea41021c0 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.973+0000 7f3e737fe640 1 -- 192.168.123.100:0/3638810696 >> v2:192.168.123.100:6800/2673235927 conn(0x7f3e800776d0 msgr2=0x7f3e80079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.973+0000 7f3e737fe640 1 --2- 192.168.123.100:0/3638810696 >> v2:192.168.123.100:6800/2673235927 conn(0x7f3e800776d0 0x7f3e80079b90 secure :-1 s=READY pgs=4271 cs=0 l=1 rev1=1 crypto rx=0x7f3e8c004640 tx=0x7f3e8c009210 comp rx=0 tx=0).stop 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.973+0000 7f3e737fe640 1 -- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 msgr2=0x7f3ea419b360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.973+0000 7f3e737fe640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 0x7f3ea419b360 secure :-1 s=READY pgs=3139 cs=0 l=1 rev1=1 crypto rx=0x7f3e94098420 tx=0x7f3e940a5ed0 comp rx=0 tx=0).stop 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 -- 192.168.123.100:0/3638810696 shutdown_connections 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 --2- 192.168.123.100:0/3638810696 >> v2:192.168.123.100:6800/2673235927 conn(0x7f3e800776d0 0x7f3e80079b90 unknown :-1 s=CLOSED pgs=4271 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f3ea4107820 0x7f3ea419f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ea41021c0 0x7f3ea419b360 unknown :-1 s=CLOSED pgs=3139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 --2- 192.168.123.100:0/3638810696 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3ea4101820 0x7f3ea419ae20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 -- 192.168.123.100:0/3638810696 >> 192.168.123.100:0/3638810696 conn(0x7f3ea40fd540 msgr2=0x7f3ea4105330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:21.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 -- 192.168.123.100:0/3638810696 shutdown_connections 2026-03-09T18:01:21.983 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:21.977+0000 7f3e737fe640 1 -- 192.168.123.100:0/3638810696 wait complete. 2026-03-09T18:01:22.004 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 131810 2026-03-09T18:01:22.004 INFO:tasks.workunit.client.0.vm00.stderr:+ [ 0 -ne 0 ] 2026-03-09T18:01:22.004 INFO:tasks.workunit.client.0.vm00.stderr:+ true 2026-03-09T18:01:22.004 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put three /etc/passwd 2026-03-09T18:01:22.034 INFO:tasks.workunit.client.0.vm00.stderr:+ uuidgen 2026-03-09T18:01:22.035 INFO:tasks.workunit.client.0.vm00.stderr:+ pp=bbf4d092-de96-4c27-8d15-50edd4b79fe3 2026-03-09T18:01:22.035 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool create bbf4d092-de96-4c27-8d15-50edd4b79fe3 12 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: audit 2026-03-09T18:01:20.587037+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: audit 2026-03-09T18:01:20.587037+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: cluster 2026-03-09T18:01:20.603888+0000 mon.a (mon.0) 3683 : cluster [DBG] osdmap e812: 8 total, 8 up, 8 in 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: cluster 2026-03-09T18:01:20.603888+0000 mon.a (mon.0) 3683 : cluster [DBG] osdmap e812: 8 total, 8 up, 8 in 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: audit 2026-03-09T18:01:20.787150+0000 mon.a (mon.0) 3684 : audit [INF] from='client.? 
192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: audit 2026-03-09T18:01:20.787150+0000 mon.a (mon.0) 3684 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: cluster 2026-03-09T18:01:21.130556+0000 mgr.y (mgr.14505) 1308 : cluster [DBG] pgmap v1782: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:21 vm00 bash[20770]: cluster 2026-03-09T18:01:21.130556+0000 mgr.y (mgr.14505) 1308 : cluster [DBG] pgmap v1782: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: audit 2026-03-09T18:01:20.587037+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: audit 2026-03-09T18:01:20.587037+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: cluster 2026-03-09T18:01:20.603888+0000 mon.a (mon.0) 3683 : cluster [DBG] osdmap e812: 8 total, 8 up, 8 in 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: cluster 2026-03-09T18:01:20.603888+0000 mon.a (mon.0) 3683 : cluster [DBG] osdmap e812: 8 total, 8 up, 8 in 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: audit 2026-03-09T18:01:20.787150+0000 mon.a (mon.0) 3684 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: audit 2026-03-09T18:01:20.787150+0000 mon.a (mon.0) 3684 : audit [INF] from='client.? 
192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: cluster 2026-03-09T18:01:21.130556+0000 mgr.y (mgr.14505) 1308 : cluster [DBG] pgmap v1782: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:01:22.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:21 vm00 bash[28333]: cluster 2026-03-09T18:01:21.130556+0000 mgr.y (mgr.14505) 1308 : cluster [DBG] pgmap v1782: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:01:22.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 -- 192.168.123.100:0/4236221063 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 msgr2=0x7f8ffc10eec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:22.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 --2- 192.168.123.100:0/4236221063 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 0x7f8ffc10eec0 secure :-1 s=READY pgs=3141 cs=0 l=1 rev1=1 crypto rx=0x7f8ff0009a30 tx=0x7f8ff001c940 comp rx=0 tx=0).stop 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 -- 192.168.123.100:0/4236221063 shutdown_connections 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 --2- 192.168.123.100:0/4236221063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ffc10f400 0x7f8ffc1117f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 --2- 192.168.123.100:0/4236221063 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 0x7f8ffc10eec0 unknown :-1 s=CLOSED pgs=3141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 --2- 192.168.123.100:0/4236221063 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc101ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 -- 192.168.123.100:0/4236221063 >> 192.168.123.100:0/4236221063 conn(0x7f8ffc0fd540 msgr2=0x7f8ffc0ff960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 -- 192.168.123.100:0/4236221063 shutdown_connections 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.089+0000 7f9001866640 1 -- 192.168.123.100:0/4236221063 wait complete. 
2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 Processor -- start 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- start start 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc19f250 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 0x7f8ffc19f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ffc10f400 0x7f8ffc1a3b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8ffc116cc0 con 0x7f8ffc101ff0 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f8ffc116b40 con 0x7f8ffc10f400 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f8ffc116e40 con 0x7f8ffc1016d0 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc19f250 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc19f250 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:34756/0 (socket says 192.168.123.100:34756) 2026-03-09T18:01:22.096 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 -- 192.168.123.100:0/279682452 learned_addr learned my addr 192.168.123.100:0/279682452 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffb7fe640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ffc10f400 0x7f8ffc1a3b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 -- 192.168.123.100:0/279682452 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ffc10f400 msgr2=0x7f8ffc1a3b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:22.097 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ffc10f400 0x7f8ffc1a3b20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 -- 192.168.123.100:0/279682452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 msgr2=0x7f8ffc19f790 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 0x7f8ffc19f790 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 -- 192.168.123.100:0/279682452 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8ffc1a4200 con 0x7f8ffc1016d0 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f8ffaffd640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc19f250 secure :-1 s=READY pgs=3204 cs=0 l=1 rev1=1 crypto rx=0x7f8fe400dcf0 tx=0x7f8fe400b630 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:22.097 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8fe4014070 con 0x7f8ffc1016d0 2026-03-09T18:01:22.098 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f8fe4004540 con 0x7f8ffc1016d0 2026-03-09T18:01:22.098 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8fe4005280 con 0x7f8ffc1016d0 2026-03-09T18:01:22.098 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- 192.168.123.100:0/279682452 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8ffc1a44f0 con 0x7f8ffc1016d0 2026-03-09T18:01:22.098 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- 192.168.123.100:0/279682452 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f8ffc1abdd0 con 0x7f8ffc1016d0 2026-03-09T18:01:22.099 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f8fe4020020 con 0x7f8ffc1016d0 2026-03-09T18:01:22.099 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.093+0000 7f9001866640 1 -- 192.168.123.100:0/279682452 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8fc0005190 con 0x7f8ffc1016d0 2026-03-09T18:01:22.102 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.097+0000 
7f9000864640 1 --2- 192.168.123.100:0/279682452 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8fd00776d0 0x7f8fd0079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:22.102 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.097+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(814..814 src has 1..814) ==== 7736+0+0 (secure 0 0 0) 0x7f8fe409aa10 con 0x7f8ffc1016d0 2026-03-09T18:01:22.102 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.097+0000 7f8ffa7fc640 1 --2- 192.168.123.100:0/279682452 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8fd00776d0 0x7f8fd0079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:22.102 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.097+0000 7f8ffa7fc640 1 --2- 192.168.123.100:0/279682452 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8fd00776d0 0x7f8fd0079b90 secure :-1 s=READY pgs=4273 cs=0 l=1 rev1=1 crypto rx=0x7f8ff001ceb0 tx=0x7f8ff00023e0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:22.102 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.097+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8fe40671e0 con 0x7f8ffc1016d0 2026-03-09T18:01:22.190 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.185+0000 7f9001866640 1 -- 192.168.123.100:0/279682452 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12} v 0) -- 0x7f8fc0005480 con 0x7f8ffc1016d0 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:21.614749+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:21.614749+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.620818+0000 mon.a (mon.0) 3686 : cluster [DBG] osdmap e813: 8 total, 8 up, 8 in 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.620818+0000 mon.a (mon.0) 3686 : cluster [DBG] osdmap e813: 8 total, 8 up, 8 in 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:21.671931+0000 mon.a (mon.0) 3687 : audit [INF] from='client.? 
192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:21.671931+0000 mon.a (mon.0) 3687 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.974030+0000 mon.a (mon.0) 3688 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.974030+0000 mon.a (mon.0) 3688 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.974218+0000 mon.a (mon.0) 3689 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.974218+0000 mon.a (mon.0) 3689 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:21.976857+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:21.976857+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.987457+0000 mon.a (mon.0) 3691 : cluster [DBG] osdmap e814: 8 total, 8 up, 8 in 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: cluster 2026-03-09T18:01:21.987457+0000 mon.a (mon.0) 3691 : cluster [DBG] osdmap e814: 8 total, 8 up, 8 in 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:22.190678+0000 mon.c (mon.2) 951 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:22.190678+0000 mon.c (mon.2) 951 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:22.191034+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:22 vm02 bash[23351]: audit 2026-03-09T18:01:22.191034+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:22.983 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:22.977+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]=0 pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' created v815) ==== 176+0+0 (secure 0 0 0) 0x7f8fe406c090 con 0x7f8ffc1016d0 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:21.614749+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:21.614749+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.620818+0000 mon.a (mon.0) 3686 : cluster [DBG] osdmap e813: 8 total, 8 up, 8 in 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.620818+0000 mon.a (mon.0) 3686 : cluster [DBG] osdmap e813: 8 total, 8 up, 8 in 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:21.671931+0000 mon.a (mon.0) 3687 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:21.671931+0000 mon.a (mon.0) 3687 : audit [INF] from='client.? 
192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.974030+0000 mon.a (mon.0) 3688 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.974030+0000 mon.a (mon.0) 3688 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.974218+0000 mon.a (mon.0) 3689 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.974218+0000 mon.a (mon.0) 3689 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:21.976857+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:21.976857+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.987457+0000 mon.a (mon.0) 3691 : cluster [DBG] osdmap e814: 8 total, 8 up, 8 in 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: cluster 2026-03-09T18:01:21.987457+0000 mon.a (mon.0) 3691 : cluster [DBG] osdmap e814: 8 total, 8 up, 8 in 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:22.190678+0000 mon.c (mon.2) 951 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:22.190678+0000 mon.c (mon.2) 951 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:22.191034+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:22 vm00 bash[28333]: audit 2026-03-09T18:01:22.191034+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:21.614749+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:21.614749+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.620818+0000 mon.a (mon.0) 3686 : cluster [DBG] osdmap e813: 8 total, 8 up, 8 in 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.620818+0000 mon.a (mon.0) 3686 : cluster [DBG] osdmap e813: 8 total, 8 up, 8 in 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:21.671931+0000 mon.a (mon.0) 3687 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:21.671931+0000 mon.a (mon.0) 3687 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.974030+0000 mon.a (mon.0) 3688 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.974030+0000 mon.a (mon.0) 3688 : cluster [INF] pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' no longer out of quota; removing NO_QUOTA flag 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.974218+0000 mon.a (mon.0) 3689 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.974218+0000 mon.a (mon.0) 3689 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:21.976857+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:21.976857+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 
192.168.123.100:0/3638810696' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.987457+0000 mon.a (mon.0) 3691 : cluster [DBG] osdmap e814: 8 total, 8 up, 8 in 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: cluster 2026-03-09T18:01:21.987457+0000 mon.a (mon.0) 3691 : cluster [DBG] osdmap e814: 8 total, 8 up, 8 in 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:22.190678+0000 mon.c (mon.2) 951 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:22.190678+0000 mon.c (mon.2) 951 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:22.191034+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:22 vm00 bash[20770]: audit 2026-03-09T18:01:22.191034+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:23.039 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.033+0000 7f9001866640 1 -- 192.168.123.100:0/279682452 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12} v 0) -- 0x7f8fc00049a0 con 0x7f8ffc1016d0 2026-03-09T18:01:23.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.033+0000 7f9000864640 1 -- 192.168.123.100:0/279682452 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]=0 pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' already exists v815) ==== 183+0+0 (secure 0 0 0) 0x7f8fe405f280 con 0x7f8ffc1016d0 2026-03-09T18:01:23.040 INFO:tasks.workunit.client.0.vm00.stderr:pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' already exists 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 -- 192.168.123.100:0/279682452 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8fd00776d0 msgr2=0x7f8fd0079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 --2- 192.168.123.100:0/279682452 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8fd00776d0 0x7f8fd0079b90 secure :-1 s=READY pgs=4273 cs=0 l=1 rev1=1 crypto rx=0x7f8ff001ceb0 tx=0x7f8ff00023e0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 -- 192.168.123.100:0/279682452 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 msgr2=0x7f8ffc19f250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc19f250 secure :-1 s=READY pgs=3204 cs=0 l=1 rev1=1 crypto rx=0x7f8fe400dcf0 tx=0x7f8fe400b630 comp rx=0 tx=0).stop 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 -- 192.168.123.100:0/279682452 shutdown_connections 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 --2- 192.168.123.100:0/279682452 >> v2:192.168.123.100:6800/2673235927 conn(0x7f8fd00776d0 0x7f8fd0079b90 unknown :-1 s=CLOSED pgs=4273 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ffc10f400 0x7f8ffc1a3b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8ffc101ff0 0x7f8ffc19f790 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 --2- 192.168.123.100:0/279682452 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8ffc1016d0 0x7f8ffc19f250 unknown :-1 s=CLOSED pgs=3204 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 -- 192.168.123.100:0/279682452 >> 192.168.123.100:0/279682452 conn(0x7f8ffc0fd540 msgr2=0x7f8ffc0ff930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 -- 192.168.123.100:0/279682452 shutdown_connections 2026-03-09T18:01:23.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.037+0000 7f8fda7fc640 1 -- 192.168.123.100:0/279682452 wait complete. 
2026-03-09T18:01:23.053 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool application enable bbf4d092-de96-4c27-8d15-50edd4b79fe3 rados 2026-03-09T18:01:23.112 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/1709783744 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c103300 msgr2=0x7f915c10f830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:23.112 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- 192.168.123.100:0/1709783744 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c103300 0x7f915c10f830 secure :-1 s=READY pgs=3205 cs=0 l=1 rev1=1 crypto rx=0x7f915800b0d0 tx=0x7f915801ca70 comp rx=0 tx=0).stop 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/1709783744 shutdown_connections 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- 192.168.123.100:0/1709783744 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c103300 0x7f915c10f830 unknown :-1 s=CLOSED pgs=3205 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- 192.168.123.100:0/1709783744 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 0x7f915c102dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- 192.168.123.100:0/1709783744 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f915c108960 0x7f915c108d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/1709783744 >> 192.168.123.100:0/1709783744 conn(0x7f915c0fe640 msgr2=0x7f915c100a60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/1709783744 shutdown_connections 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/1709783744 wait complete. 
2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 Processor -- start 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- start start 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 0x7f915c19ce90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f915c103300 0x7f915c19d3d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 0x7f915c1a1760 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f915c114a10 con 0x7f915c102960 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f915c114890 con 0x7f915c103300 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f915c114b90 con 0x7f915c108960 2026-03-09T18:01:23.113 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 0x7f915c1a1760 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 0x7f915c1a1760 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:34770/0 (socket says 192.168.123.100:34770) 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 -- 192.168.123.100:0/2079842878 learned_addr learned my addr 192.168.123.100:0/2079842878 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 -- 192.168.123.100:0/2079842878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f915c103300 msgr2=0x7f915c19d3d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f915c103300 0x7f915c19d3d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 -- 192.168.123.100:0/2079842878 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 msgr2=0x7f915c19ce90 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f9161e1a640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 0x7f915c19ce90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 0x7f915c19ce90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 -- 192.168.123.100:0/2079842878 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f915c1a1e40 con 0x7f915c108960 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f9161e1a640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 0x7f915c19ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f916261b640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 0x7f915c1a1760 secure :-1 s=READY pgs=3206 cs=0 l=1 rev1=1 crypto rx=0x7f9158098550 tx=0x7f9158007870 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:23.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f91580096e0 con 0x7f915c108960 2026-03-09T18:01:23.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f9158004070 con 0x7f915c108960 2026-03-09T18:01:23.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f915c1a20d0 con 0x7f915c108960 2026-03-09T18:01:23.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f915c1a2560 con 0x7f915c108960 2026-03-09T18:01:23.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.109+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9158007c60 con 0x7f915c108960 2026-03-09T18:01:23.119 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9124005190 con 0x7f915c108960 2026-03-09T18:01:23.119 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f91580a5ce0 con 0x7f915c108960 2026-03-09T18:01:23.119 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f914affd640 1 --2- 192.168.123.100:0/2079842878 >> v2:192.168.123.100:6800/2673235927 conn(0x7f91380776d0 0x7f9138079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:23.119 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(815..815 src has 1..815) ==== 8111+0+0 (secure 0 0 0) 0x7f9158134340 con 0x7f915c108960 2026-03-09T18:01:23.119 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f9161e1a640 1 --2- 192.168.123.100:0/2079842878 >> v2:192.168.123.100:6800/2673235927 conn(0x7f91380776d0 0x7f9138079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:23.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9158016780 con 0x7f915c108960 2026-03-09T18:01:23.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.113+0000 7f9161e1a640 1 --2- 192.168.123.100:0/2079842878 >> v2:192.168.123.100:6800/2673235927 conn(0x7f91380776d0 0x7f9138079b90 secure :-1 s=READY pgs=4274 cs=0 l=1 rev1=1 crypto rx=0x7f9150008b10 tx=0x7f9150005e30 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:23.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.205+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"} v 0) -- 0x7f9124005480 con 0x7f915c108960 2026-03-09T18:01:23.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:01:23 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:01:23.990 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:23.985+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]=0 enabled application 'rados' on pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' v816) ==== 213+0+0 (secure 0 0 0) 0x7f9158100920 con 0x7f915c108960 2026-03-09T18:01:24.045 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:24.041+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"} v 0) -- 0x7f9124004820 con 0x7f915c108960 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:22.980080+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]': finished 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:22.980080+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]': finished 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: cluster 2026-03-09T18:01:22.989882+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e815: 8 total, 8 up, 8 in 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: cluster 2026-03-09T18:01:22.989882+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e815: 8 total, 8 up, 8 in 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.039236+0000 mon.c (mon.2) 952 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.039236+0000 mon.c (mon.2) 952 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.039743+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.039743+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: cluster 2026-03-09T18:01:23.130901+0000 mgr.y (mgr.14505) 1309 : cluster [DBG] pgmap v1786: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: cluster 2026-03-09T18:01:23.130901+0000 mgr.y (mgr.14505) 1309 : cluster [DBG] pgmap v1786: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.144362+0000 mgr.y (mgr.14505) 1310 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.144362+0000 mgr.y (mgr.14505) 1310 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.211154+0000 mon.c (mon.2) 953 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.211154+0000 mon.c (mon.2) 953 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.211695+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:24 vm00 bash[28333]: audit 2026-03-09T18:01:23.211695+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:22.980080+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]': finished 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:22.980080+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]': finished 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: cluster 2026-03-09T18:01:22.989882+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e815: 8 total, 8 up, 8 in 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: cluster 2026-03-09T18:01:22.989882+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e815: 8 total, 8 up, 8 in 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.039236+0000 mon.c (mon.2) 952 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.039236+0000 mon.c (mon.2) 952 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.039743+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.039743+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: cluster 2026-03-09T18:01:23.130901+0000 mgr.y (mgr.14505) 1309 : cluster [DBG] pgmap v1786: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: cluster 2026-03-09T18:01:23.130901+0000 mgr.y (mgr.14505) 1309 : cluster [DBG] pgmap v1786: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.144362+0000 mgr.y (mgr.14505) 1310 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.144362+0000 mgr.y (mgr.14505) 1310 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.211154+0000 mon.c (mon.2) 953 : audit [INF] from='client.? 
192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.211154+0000 mon.c (mon.2) 953 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.211695+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:24 vm00 bash[20770]: audit 2026-03-09T18:01:23.211695+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:22.980080+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]': finished 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:22.980080+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]': finished 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: cluster 2026-03-09T18:01:22.989882+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e815: 8 total, 8 up, 8 in 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: cluster 2026-03-09T18:01:22.989882+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e815: 8 total, 8 up, 8 in 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.039236+0000 mon.c (mon.2) 952 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.039236+0000 mon.c (mon.2) 952 : audit [INF] from='client.? 192.168.123.100:0/279682452' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.039743+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.039743+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pg_num": 12}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: cluster 2026-03-09T18:01:23.130901+0000 mgr.y (mgr.14505) 1309 : cluster [DBG] pgmap v1786: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: cluster 2026-03-09T18:01:23.130901+0000 mgr.y (mgr.14505) 1309 : cluster [DBG] pgmap v1786: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.144362+0000 mgr.y (mgr.14505) 1310 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.144362+0000 mgr.y (mgr.14505) 1310 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.211154+0000 mon.c (mon.2) 953 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.211154+0000 mon.c (mon.2) 953 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.211695+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:24.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:24 vm02 bash[23351]: audit 2026-03-09T18:01:23.211695+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f914affd640 1 -- 192.168.123.100:0/2079842878 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]=0 enabled application 'rados' on pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' v817) ==== 213+0+0 (secure 0 0 0) 0x7f91581057d0 con 0x7f915c108960 2026-03-09T18:01:25.030 INFO:tasks.workunit.client.0.vm00.stderr:enabled application 'rados' on pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 >> v2:192.168.123.100:6800/2673235927 conn(0x7f91380776d0 msgr2=0x7f9138079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 --2- 192.168.123.100:0/2079842878 >> v2:192.168.123.100:6800/2673235927 conn(0x7f91380776d0 0x7f9138079b90 secure :-1 s=READY pgs=4274 cs=0 l=1 rev1=1 crypto rx=0x7f9150008b10 tx=0x7f9150005e30 comp rx=0 tx=0).stop 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 msgr2=0x7f915c1a1760 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 0x7f915c1a1760 secure :-1 s=READY pgs=3206 cs=0 l=1 rev1=1 crypto rx=0x7f9158098550 tx=0x7f9158007870 comp rx=0 tx=0).stop 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 shutdown_connections 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 --2- 192.168.123.100:0/2079842878 >> v2:192.168.123.100:6800/2673235927 conn(0x7f91380776d0 0x7f9138079b90 unknown :-1 s=CLOSED pgs=4274 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f915c108960 0x7f915c1a1760 unknown :-1 s=CLOSED pgs=3206 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f915c103300 0x7f915c19d3d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.025+0000 7f91640a5640 1 --2- 192.168.123.100:0/2079842878 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f915c102960 0x7f915c19ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.029+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 >> 
192.168.123.100:0/2079842878 conn(0x7f915c0fe640 msgr2=0x7f915c0fee80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.029+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 shutdown_connections 2026-03-09T18:01:25.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.029+0000 7f91640a5640 1 -- 192.168.123.100:0/2079842878 wait complete. 2026-03-09T18:01:25.050 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bbf4d092-de96-4c27-8d15-50edd4b79fe3 max_objects 10 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/1061402812 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b810f390 msgr2=0x7fd4b8111780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- 192.168.123.100:0/1061402812 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b810f390 0x7fd4b8111780 secure :-1 s=READY pgs=3142 cs=0 l=1 rev1=1 crypto rx=0x7fd4b400b3e0 tx=0x7fd4b401ccb0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/1061402812 shutdown_connections 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- 192.168.123.100:0/1061402812 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b810f390 0x7fd4b8111780 unknown :-1 s=CLOSED pgs=3142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- 192.168.123.100:0/1061402812 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b8101f80 0x7fd4b810ee50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- 192.168.123.100:0/1061402812 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101660 0x7fd4b8101a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/1061402812 >> 192.168.123.100:0/1061402812 conn(0x7fd4b80fd530 msgr2=0x7fd4b80ff950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/1061402812 shutdown_connections 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/1061402812 wait complete. 
2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 Processor -- start 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- start start 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 0x7fd4b819f1e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101f80 0x7fd4b819f720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:25.114 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b810f390 0x7fd4b81a3ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd4b8116cb0 con 0x7fd4b8101660 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fd4b8116b30 con 0x7fd4b810f390 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fd4b8116e30 con 0x7fd4b8101f80 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 0x7fd4b819f1e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 0x7fd4b819f1e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:41158/0 (socket says 192.168.123.100:41158) 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 -- 192.168.123.100:0/4261399902 learned_addr learned my addr 192.168.123.100:0/4261399902 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4be6f5640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b810f390 0x7fd4b81a3ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bd6f3640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101f80 0x7fd4b819f720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 -- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101f80 msgr2=0x7fd4b819f720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101f80 0x7fd4b819f720 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 -- 192.168.123.100:0/4261399902 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b810f390 msgr2=0x7fd4b81a3ab0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b810f390 0x7fd4b81a3ab0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 -- 192.168.123.100:0/4261399902 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd4b81a4230 con 0x7fd4b8101660 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bdef4640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 0x7fd4b819f1e0 secure :-1 s=READY pgs=3143 cs=0 l=1 rev1=1 crypto rx=0x7fd4ac00dab0 tx=0x7fd4ac00df70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:25.115 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4be6f5640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b810f390 0x7fd4b81a3ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4bd6f3640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101f80 0x7fd4b819f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd4ac014070 con 0x7fd4b8101660 2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd4ac004540 con 0x7fd4b8101660 2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd4ac002e40 con 0x7fd4b8101660 2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd4b81a4520 con 0x7fd4b8101660 2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.109+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd4b81038f0 con 0x7fd4b8101660 2026-03-09T18:01:25.116 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.113+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd480005190 con 0x7fd4b8101660 2026-03-09T18:01:25.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.113+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fd4ac0040d0 con 0x7fd4b8101660 2026-03-09T18:01:25.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.113+0000 7fd4a6ffd640 1 --2- 192.168.123.100:0/4261399902 >> v2:192.168.123.100:6800/2673235927 conn(0x7fd4940777a0 0x7fd494079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:01:25.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.113+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(817..817 src has 1..817) ==== 8124+0+0 (secure 0 0 0) 0x7fd4ac099fe0 con 0x7fd4b8101660 2026-03-09T18:01:25.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.117+0000 7fd4bd6f3640 1 --2- 192.168.123.100:0/4261399902 >> v2:192.168.123.100:6800/2673235927 conn(0x7fd4940777a0 0x7fd494079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:01:25.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.117+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd4ac0666b0 con 0x7fd4b8101660 2026-03-09T18:01:25.120 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.117+0000 7fd4bd6f3640 1 --2- 192.168.123.100:0/4261399902 >> v2:192.168.123.100:6800/2673235927 conn(0x7fd4940777a0 0x7fd494079c60 secure :-1 s=READY pgs=4275 cs=0 l=1 rev1=1 crypto rx=0x7fd4b81006e0 tx=0x7fd4a8009290 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:01:25.208 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:25.205+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"} v 0) -- 0x7fd480005480 con 0x7fd4b8101660 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: cluster 2026-03-09T18:01:23.980952+0000 mon.a (mon.0) 3697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: cluster 2026-03-09T18:01:23.980952+0000 mon.a (mon.0) 3697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: audit 2026-03-09T18:01:23.982926+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: audit 2026-03-09T18:01:23.982926+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: cluster 2026-03-09T18:01:23.992829+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e816: 8 total, 8 up, 8 in 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: cluster 2026-03-09T18:01:23.992829+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e816: 8 total, 8 up, 8 in 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: audit 2026-03-09T18:01:24.045899+0000 mon.c (mon.2) 954 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: audit 2026-03-09T18:01:24.045899+0000 mon.c (mon.2) 954 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: audit 2026-03-09T18:01:24.046378+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:25 vm02 bash[23351]: audit 2026-03-09T18:01:24.046378+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: cluster 2026-03-09T18:01:23.980952+0000 mon.a (mon.0) 3697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: cluster 2026-03-09T18:01:23.980952+0000 mon.a (mon.0) 3697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: audit 2026-03-09T18:01:23.982926+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: audit 2026-03-09T18:01:23.982926+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: cluster 2026-03-09T18:01:23.992829+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e816: 8 total, 8 up, 8 in 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: cluster 2026-03-09T18:01:23.992829+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e816: 8 total, 8 up, 8 in 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: audit 2026-03-09T18:01:24.045899+0000 mon.c (mon.2) 954 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: audit 2026-03-09T18:01:24.045899+0000 mon.c (mon.2) 954 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: audit 2026-03-09T18:01:24.046378+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:25 vm00 bash[28333]: audit 2026-03-09T18:01:24.046378+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: cluster 2026-03-09T18:01:23.980952+0000 mon.a (mon.0) 3697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:25.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: cluster 2026-03-09T18:01:23.980952+0000 mon.a (mon.0) 3697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: audit 2026-03-09T18:01:23.982926+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: audit 2026-03-09T18:01:23.982926+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: cluster 2026-03-09T18:01:23.992829+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e816: 8 total, 8 up, 8 in 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: cluster 2026-03-09T18:01:23.992829+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e816: 8 total, 8 up, 8 in 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: audit 2026-03-09T18:01:24.045899+0000 mon.c (mon.2) 954 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: audit 2026-03-09T18:01:24.045899+0000 mon.c (mon.2) 954 : audit [INF] from='client.? 192.168.123.100:0/2079842878' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: audit 2026-03-09T18:01:24.046378+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:25.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:25 vm00 bash[20770]: audit 2026-03-09T18:01:24.046378+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]: dispatch 2026-03-09T18:01:26.062 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:26.057+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool bbf4d092-de96-4c27-8d15-50edd4b79fe3 v818) ==== 223+0+0 (secure 0 0 0) 0x7fd4ac06b560 con 0x7fd4b8101660 2026-03-09T18:01:26.117 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:26.113+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"} v 0) -- 0x7fd480004910 con 0x7fd4b8101660 2026-03-09T18:01:26.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: audit 2026-03-09T18:01:25.021402+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: audit 2026-03-09T18:01:25.021402+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: cluster 2026-03-09T18:01:25.026035+0000 mon.a (mon.0) 3702 : cluster [DBG] osdmap e817: 8 total, 8 up, 8 in 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: cluster 2026-03-09T18:01:25.026035+0000 mon.a (mon.0) 3702 : cluster [DBG] osdmap e817: 8 total, 8 up, 8 in 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: cluster 2026-03-09T18:01:25.131226+0000 mgr.y (mgr.14505) 1311 : cluster [DBG] pgmap v1789: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: cluster 2026-03-09T18:01:25.131226+0000 mgr.y (mgr.14505) 1311 : cluster [DBG] pgmap v1789: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: audit 2026-03-09T18:01:25.209266+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:26.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:26 vm02 bash[23351]: audit 2026-03-09T18:01:25.209266+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 
192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: audit 2026-03-09T18:01:25.021402+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: audit 2026-03-09T18:01:25.021402+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: cluster 2026-03-09T18:01:25.026035+0000 mon.a (mon.0) 3702 : cluster [DBG] osdmap e817: 8 total, 8 up, 8 in 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: cluster 2026-03-09T18:01:25.026035+0000 mon.a (mon.0) 3702 : cluster [DBG] osdmap e817: 8 total, 8 up, 8 in 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: cluster 2026-03-09T18:01:25.131226+0000 mgr.y (mgr.14505) 1311 : cluster [DBG] pgmap v1789: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: cluster 2026-03-09T18:01:25.131226+0000 mgr.y (mgr.14505) 1311 : cluster [DBG] pgmap v1789: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: audit 2026-03-09T18:01:25.209266+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:26 vm00 bash[20770]: audit 2026-03-09T18:01:25.209266+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: audit 2026-03-09T18:01:25.021402+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: audit 2026-03-09T18:01:25.021402+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "app": "rados"}]': finished 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: cluster 2026-03-09T18:01:25.026035+0000 mon.a (mon.0) 3702 : cluster [DBG] osdmap e817: 8 total, 8 up, 8 in 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: cluster 2026-03-09T18:01:25.026035+0000 mon.a (mon.0) 3702 : cluster [DBG] osdmap e817: 8 total, 8 up, 8 in 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: cluster 2026-03-09T18:01:25.131226+0000 mgr.y (mgr.14505) 1311 : cluster [DBG] pgmap v1789: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: cluster 2026-03-09T18:01:25.131226+0000 mgr.y (mgr.14505) 1311 : cluster [DBG] pgmap v1789: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: audit 2026-03-09T18:01:25.209266+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:26 vm00 bash[28333]: audit 2026-03-09T18:01:25.209266+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:26.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:01:26 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:01:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:01:27.074 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.069+0000 7fd4a6ffd640 1 -- 192.168.123.100:0/4261399902 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool bbf4d092-de96-4c27-8d15-50edd4b79fe3 v819) ==== 223+0+0 (secure 0 0 0) 0x7fd4ac05e750 con 0x7fd4b8101660 2026-03-09T18:01:27.074 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 10 for pool bbf4d092-de96-4c27-8d15-50edd4b79fe3 2026-03-09T18:01:27.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 >> v2:192.168.123.100:6800/2673235927 conn(0x7fd4940777a0 msgr2=0x7fd494079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:27.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 --2- 192.168.123.100:0/4261399902 >> v2:192.168.123.100:6800/2673235927 conn(0x7fd4940777a0 0x7fd494079c60 secure :-1 s=READY pgs=4275 cs=0 l=1 rev1=1 crypto rx=0x7fd4b81006e0 tx=0x7fd4a8009290 comp rx=0 tx=0).stop 2026-03-09T18:01:27.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 
msgr2=0x7fd4b819f1e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:01:27.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 0x7fd4b819f1e0 secure :-1 s=READY pgs=3143 cs=0 l=1 rev1=1 crypto rx=0x7fd4ac00dab0 tx=0x7fd4ac00df70 comp rx=0 tx=0).stop 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 shutdown_connections 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 --2- 192.168.123.100:0/4261399902 >> v2:192.168.123.100:6800/2673235927 conn(0x7fd4940777a0 0x7fd494079c60 unknown :-1 s=CLOSED pgs=4275 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd4b810f390 0x7fd4b81a3ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd4b8101f80 0x7fd4b819f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 --2- 192.168.123.100:0/4261399902 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd4b8101660 0x7fd4b819f1e0 unknown :-1 s=CLOSED pgs=3143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 >> 192.168.123.100:0/4261399902 conn(0x7fd4b80fd530 msgr2=0x7fd4b810fa10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 shutdown_connections 2026-03-09T18:01:27.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:01:27.073+0000 7fd4c017f640 1 -- 192.168.123.100:0/4261399902 wait complete. 2026-03-09T18:01:27.092 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-09T18:01:27.385 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:27 vm02 bash[23351]: audit 2026-03-09T18:01:26.061814+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:27 vm02 bash[23351]: audit 2026-03-09T18:01:26.061814+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 
192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:27 vm02 bash[23351]: cluster 2026-03-09T18:01:26.079306+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e818: 8 total, 8 up, 8 in 2026-03-09T18:01:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:27 vm02 bash[23351]: cluster 2026-03-09T18:01:26.079306+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e818: 8 total, 8 up, 8 in 2026-03-09T18:01:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:27 vm02 bash[23351]: audit 2026-03-09T18:01:26.117968+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:27.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:27 vm02 bash[23351]: audit 2026-03-09T18:01:26.117968+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:27 vm00 bash[28333]: audit 2026-03-09T18:01:26.061814+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:27 vm00 bash[28333]: audit 2026-03-09T18:01:26.061814+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:27 vm00 bash[28333]: cluster 2026-03-09T18:01:26.079306+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e818: 8 total, 8 up, 8 in 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:27 vm00 bash[28333]: cluster 2026-03-09T18:01:26.079306+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e818: 8 total, 8 up, 8 in 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:27 vm00 bash[28333]: audit 2026-03-09T18:01:26.117968+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:27 vm00 bash[28333]: audit 2026-03-09T18:01:26.117968+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:27 vm00 bash[20770]: audit 2026-03-09T18:01:26.061814+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 
192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:27 vm00 bash[20770]: audit 2026-03-09T18:01:26.061814+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:27 vm00 bash[20770]: cluster 2026-03-09T18:01:26.079306+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e818: 8 total, 8 up, 8 in 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:27 vm00 bash[20770]: cluster 2026-03-09T18:01:26.079306+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e818: 8 total, 8 up, 8 in 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:27 vm00 bash[20770]: audit 2026-03-09T18:01:26.117968+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:27.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:27 vm00 bash[20770]: audit 2026-03-09T18:01:26.117968+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.074259+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.074259+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 
192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: cluster 2026-03-09T18:01:27.083408+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e819: 8 total, 8 up, 8 in 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: cluster 2026-03-09T18:01:27.083408+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e819: 8 total, 8 up, 8 in 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: cluster 2026-03-09T18:01:27.131557+0000 mgr.y (mgr.14505) 1312 : cluster [DBG] pgmap v1792: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: cluster 2026-03-09T18:01:27.131557+0000 mgr.y (mgr.14505) 1312 : cluster [DBG] pgmap v1792: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.531395+0000 mon.c (mon.2) 955 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.531395+0000 mon.c (mon.2) 955 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.816839+0000 mon.a (mon.0) 3709 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.816839+0000 mon.a (mon.0) 3709 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.823892+0000 mon.a (mon.0) 3710 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:28 vm02 bash[23351]: audit 2026-03-09T18:01:27.823892+0000 mon.a (mon.0) 3710 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.074259+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.074259+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 
192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: cluster 2026-03-09T18:01:27.083408+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e819: 8 total, 8 up, 8 in 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: cluster 2026-03-09T18:01:27.083408+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e819: 8 total, 8 up, 8 in 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: cluster 2026-03-09T18:01:27.131557+0000 mgr.y (mgr.14505) 1312 : cluster [DBG] pgmap v1792: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: cluster 2026-03-09T18:01:27.131557+0000 mgr.y (mgr.14505) 1312 : cluster [DBG] pgmap v1792: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.531395+0000 mon.c (mon.2) 955 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.531395+0000 mon.c (mon.2) 955 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.816839+0000 mon.a (mon.0) 3709 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.816839+0000 mon.a (mon.0) 3709 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.823892+0000 mon.a (mon.0) 3710 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:28 vm00 bash[28333]: audit 2026-03-09T18:01:27.823892+0000 mon.a (mon.0) 3710 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.074259+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.074259+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 
192.168.123.100:0/4261399902' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "field": "max_objects", "val": "10"}]': finished 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: cluster 2026-03-09T18:01:27.083408+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e819: 8 total, 8 up, 8 in 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: cluster 2026-03-09T18:01:27.083408+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e819: 8 total, 8 up, 8 in 2026-03-09T18:01:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: cluster 2026-03-09T18:01:27.131557+0000 mgr.y (mgr.14505) 1312 : cluster [DBG] pgmap v1792: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: cluster 2026-03-09T18:01:27.131557+0000 mgr.y (mgr.14505) 1312 : cluster [DBG] pgmap v1792: 188 pgs: 12 unknown, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.531395+0000 mon.c (mon.2) 955 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.531395+0000 mon.c (mon.2) 955 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.816839+0000 mon.a (mon.0) 3709 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.816839+0000 mon.a (mon.0) 3709 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.823892+0000 mon.a (mon.0) 3710 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:28.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:28 vm00 bash[20770]: audit 2026-03-09T18:01:27.823892+0000 mon.a (mon.0) 3710 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.137980+0000 mon.c (mon.2) 956 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.137980+0000 mon.c (mon.2) 956 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.139006+0000 mon.c (mon.2) 957 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.139006+0000 mon.c (mon.2) 
957 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.144414+0000 mon.a (mon.0) 3711 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.144414+0000 mon.a (mon.0) 3711 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.992886+0000 mon.a (mon.0) 3712 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.992886+0000 mon.a (mon.0) 3712 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.994002+0000 mon.c (mon.2) 958 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:29.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:29 vm02 bash[23351]: audit 2026-03-09T18:01:28.994002+0000 mon.c (mon.2) 958 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.137980+0000 mon.c (mon.2) 956 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.137980+0000 mon.c (mon.2) 956 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.139006+0000 mon.c (mon.2) 957 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.139006+0000 mon.c (mon.2) 957 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.144414+0000 mon.a (mon.0) 3711 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.144414+0000 mon.a (mon.0) 3711 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.992886+0000 mon.a (mon.0) 3712 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.992886+0000 mon.a (mon.0) 3712 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.994002+0000 mon.c (mon.2) 958 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:29 vm00 bash[28333]: audit 2026-03-09T18:01:28.994002+0000 mon.c (mon.2) 958 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.137980+0000 mon.c (mon.2) 956 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.137980+0000 mon.c (mon.2) 956 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.139006+0000 mon.c (mon.2) 957 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.139006+0000 mon.c (mon.2) 957 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.144414+0000 mon.a (mon.0) 3711 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.144414+0000 mon.a (mon.0) 3711 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.992886+0000 mon.a (mon.0) 3712 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.992886+0000 mon.a (mon.0) 3712 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.994002+0000 mon.c (mon.2) 958 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:29 vm00 bash[20770]: audit 2026-03-09T18:01:28.994002+0000 mon.c (mon.2) 958 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:30 vm02 bash[23351]: cluster 2026-03-09T18:01:29.132151+0000 mgr.y (mgr.14505) 1313 : cluster [DBG] pgmap v1793: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 0 op/s 2026-03-09T18:01:30.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:30 vm02 bash[23351]: cluster 2026-03-09T18:01:29.132151+0000 mgr.y (mgr.14505) 
1313 : cluster [DBG] pgmap v1793: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 0 op/s 2026-03-09T18:01:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:30 vm00 bash[28333]: cluster 2026-03-09T18:01:29.132151+0000 mgr.y (mgr.14505) 1313 : cluster [DBG] pgmap v1793: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 0 op/s 2026-03-09T18:01:30.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:30 vm00 bash[28333]: cluster 2026-03-09T18:01:29.132151+0000 mgr.y (mgr.14505) 1313 : cluster [DBG] pgmap v1793: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 0 op/s 2026-03-09T18:01:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:30 vm00 bash[20770]: cluster 2026-03-09T18:01:29.132151+0000 mgr.y (mgr.14505) 1313 : cluster [DBG] pgmap v1793: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 0 op/s 2026-03-09T18:01:30.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:30 vm00 bash[20770]: cluster 2026-03-09T18:01:29.132151+0000 mgr.y (mgr.14505) 1313 : cluster [DBG] pgmap v1793: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 0 op/s 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:32 vm00 bash[28333]: cluster 2026-03-09T18:01:31.132501+0000 mgr.y (mgr.14505) 1314 : cluster [DBG] pgmap v1794: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 838 B/s rd, 0 op/s 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:32 vm00 bash[28333]: cluster 2026-03-09T18:01:31.132501+0000 mgr.y (mgr.14505) 1314 : cluster [DBG] pgmap v1794: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 838 B/s rd, 0 op/s 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:32 vm00 bash[28333]: cluster 2026-03-09T18:01:31.976078+0000 mon.a (mon.0) 3713 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:32 vm00 bash[28333]: cluster 2026-03-09T18:01:31.976078+0000 mon.a (mon.0) 3713 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:32 vm00 bash[20770]: cluster 2026-03-09T18:01:31.132501+0000 mgr.y (mgr.14505) 1314 : cluster [DBG] pgmap v1794: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 838 B/s rd, 0 op/s 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:32 vm00 bash[20770]: cluster 2026-03-09T18:01:31.132501+0000 mgr.y (mgr.14505) 1314 : cluster [DBG] pgmap v1794: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 838 B/s rd, 0 op/s 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:32 vm00 bash[20770]: cluster 2026-03-09T18:01:31.976078+0000 mon.a (mon.0) 3713 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:32.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:32 vm00 bash[20770]: cluster 2026-03-09T18:01:31.976078+0000 mon.a (mon.0) 3713 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 
18:01:32 vm02 bash[23351]: cluster 2026-03-09T18:01:31.132501+0000 mgr.y (mgr.14505) 1314 : cluster [DBG] pgmap v1794: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 838 B/s rd, 0 op/s 2026-03-09T18:01:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:32 vm02 bash[23351]: cluster 2026-03-09T18:01:31.132501+0000 mgr.y (mgr.14505) 1314 : cluster [DBG] pgmap v1794: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 838 B/s rd, 0 op/s 2026-03-09T18:01:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:32 vm02 bash[23351]: cluster 2026-03-09T18:01:31.976078+0000 mon.a (mon.0) 3713 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:32.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:32 vm02 bash[23351]: cluster 2026-03-09T18:01:31.976078+0000 mon.a (mon.0) 3713 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:01:33.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:01:33 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:01:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:34 vm00 bash[28333]: cluster 2026-03-09T18:01:33.133195+0000 mgr.y (mgr.14505) 1315 : cluster [DBG] pgmap v1795: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:34 vm00 bash[28333]: cluster 2026-03-09T18:01:33.133195+0000 mgr.y (mgr.14505) 1315 : cluster [DBG] pgmap v1795: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:34 vm00 bash[28333]: audit 2026-03-09T18:01:33.155054+0000 mgr.y (mgr.14505) 1316 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:34.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:34 vm00 bash[28333]: audit 2026-03-09T18:01:33.155054+0000 mgr.y (mgr.14505) 1316 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:34 vm00 bash[20770]: cluster 2026-03-09T18:01:33.133195+0000 mgr.y (mgr.14505) 1315 : cluster [DBG] pgmap v1795: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:34.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:34 vm00 bash[20770]: cluster 2026-03-09T18:01:33.133195+0000 mgr.y (mgr.14505) 1315 : cluster [DBG] pgmap v1795: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:34.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:34 vm00 bash[20770]: audit 2026-03-09T18:01:33.155054+0000 mgr.y (mgr.14505) 1316 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:34.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:34 vm00 bash[20770]: audit 2026-03-09T18:01:33.155054+0000 mgr.y (mgr.14505) 1316 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 
09 18:01:34 vm02 bash[23351]: cluster 2026-03-09T18:01:33.133195+0000 mgr.y (mgr.14505) 1315 : cluster [DBG] pgmap v1795: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:34 vm02 bash[23351]: cluster 2026-03-09T18:01:33.133195+0000 mgr.y (mgr.14505) 1315 : cluster [DBG] pgmap v1795: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:34 vm02 bash[23351]: audit 2026-03-09T18:01:33.155054+0000 mgr.y (mgr.14505) 1316 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:34.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:34 vm02 bash[23351]: audit 2026-03-09T18:01:33.155054+0000 mgr.y (mgr.14505) 1316 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:36 vm00 bash[28333]: cluster 2026-03-09T18:01:35.133528+0000 mgr.y (mgr.14505) 1317 : cluster [DBG] pgmap v1796: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:01:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:36 vm00 bash[28333]: cluster 2026-03-09T18:01:35.133528+0000 mgr.y (mgr.14505) 1317 : cluster [DBG] pgmap v1796: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:01:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:36 vm00 bash[20770]: cluster 2026-03-09T18:01:35.133528+0000 mgr.y (mgr.14505) 1317 : cluster [DBG] pgmap v1796: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:01:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:36 vm00 bash[20770]: cluster 2026-03-09T18:01:35.133528+0000 mgr.y (mgr.14505) 1317 : cluster [DBG] pgmap v1796: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:01:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:01:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:01:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:01:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:36 vm02 bash[23351]: cluster 2026-03-09T18:01:35.133528+0000 mgr.y (mgr.14505) 1317 : cluster [DBG] pgmap v1796: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:01:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:36 vm02 bash[23351]: cluster 2026-03-09T18:01:35.133528+0000 mgr.y (mgr.14505) 1317 : cluster [DBG] pgmap v1796: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:01:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:38 vm00 bash[28333]: cluster 2026-03-09T18:01:37.133861+0000 mgr.y (mgr.14505) 1318 : cluster [DBG] pgmap v1797: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-09T18:01:38.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:38 vm00 bash[28333]: cluster 2026-03-09T18:01:37.133861+0000 mgr.y (mgr.14505) 1318 : cluster [DBG] pgmap v1797: 188 pgs: 188 active+clean; 484 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-09T18:01:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:38 vm00 bash[20770]: cluster 2026-03-09T18:01:37.133861+0000 mgr.y (mgr.14505) 1318 : cluster [DBG] pgmap v1797: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-09T18:01:38.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:38 vm00 bash[20770]: cluster 2026-03-09T18:01:37.133861+0000 mgr.y (mgr.14505) 1318 : cluster [DBG] pgmap v1797: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-09T18:01:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:38 vm02 bash[23351]: cluster 2026-03-09T18:01:37.133861+0000 mgr.y (mgr.14505) 1318 : cluster [DBG] pgmap v1797: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-09T18:01:38.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:38 vm02 bash[23351]: cluster 2026-03-09T18:01:37.133861+0000 mgr.y (mgr.14505) 1318 : cluster [DBG] pgmap v1797: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-09T18:01:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:40 vm00 bash[28333]: cluster 2026-03-09T18:01:39.134713+0000 mgr.y (mgr.14505) 1319 : cluster [DBG] pgmap v1798: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:40.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:40 vm00 bash[28333]: cluster 2026-03-09T18:01:39.134713+0000 mgr.y (mgr.14505) 1319 : cluster [DBG] pgmap v1798: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:40 vm00 bash[20770]: cluster 2026-03-09T18:01:39.134713+0000 mgr.y (mgr.14505) 1319 : cluster [DBG] pgmap v1798: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:40.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:40 vm00 bash[20770]: cluster 2026-03-09T18:01:39.134713+0000 mgr.y (mgr.14505) 1319 : cluster [DBG] pgmap v1798: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:40 vm02 bash[23351]: cluster 2026-03-09T18:01:39.134713+0000 mgr.y (mgr.14505) 1319 : cluster [DBG] pgmap v1798: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:40.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:40 vm02 bash[23351]: cluster 2026-03-09T18:01:39.134713+0000 mgr.y (mgr.14505) 1319 : cluster [DBG] pgmap v1798: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:42 vm00 bash[28333]: cluster 2026-03-09T18:01:41.135092+0000 mgr.y (mgr.14505) 1320 : cluster [DBG] pgmap v1799: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:42 vm00 bash[28333]: cluster 2026-03-09T18:01:41.135092+0000 mgr.y (mgr.14505) 1320 : cluster [DBG] pgmap v1799: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:42.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:42 vm00 bash[20770]: cluster 2026-03-09T18:01:41.135092+0000 mgr.y (mgr.14505) 1320 : cluster [DBG] pgmap v1799: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:42 vm00 bash[20770]: cluster 2026-03-09T18:01:41.135092+0000 mgr.y (mgr.14505) 1320 : cluster [DBG] pgmap v1799: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:42.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:42 vm02 bash[23351]: cluster 2026-03-09T18:01:41.135092+0000 mgr.y (mgr.14505) 1320 : cluster [DBG] pgmap v1799: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:42 vm02 bash[23351]: cluster 2026-03-09T18:01:41.135092+0000 mgr.y (mgr.14505) 1320 : cluster [DBG] pgmap v1799: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:43.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:01:43 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:44 vm00 bash[28333]: cluster 2026-03-09T18:01:43.135651+0000 mgr.y (mgr.14505) 1321 : cluster [DBG] pgmap v1800: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:44 vm00 bash[28333]: cluster 2026-03-09T18:01:43.135651+0000 mgr.y (mgr.14505) 1321 : cluster [DBG] pgmap v1800: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:44 vm00 bash[28333]: audit 2026-03-09T18:01:43.165685+0000 mgr.y (mgr.14505) 1322 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:44 vm00 bash[28333]: audit 2026-03-09T18:01:43.165685+0000 mgr.y (mgr.14505) 1322 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:44 vm00 bash[28333]: audit 2026-03-09T18:01:44.000798+0000 mon.c (mon.2) 959 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:44 vm00 bash[28333]: audit 2026-03-09T18:01:44.000798+0000 mon.c (mon.2) 959 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:44 vm00 bash[20770]: cluster 2026-03-09T18:01:43.135651+0000 mgr.y (mgr.14505) 1321 : cluster [DBG] pgmap v1800: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:44 vm00 bash[20770]: cluster 2026-03-09T18:01:43.135651+0000 mgr.y (mgr.14505) 1321 : cluster [DBG] pgmap v1800: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:44 vm00 bash[20770]: audit 2026-03-09T18:01:43.165685+0000 mgr.y (mgr.14505) 1322 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:44 vm00 bash[20770]: audit 2026-03-09T18:01:43.165685+0000 mgr.y (mgr.14505) 1322 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:44 vm00 bash[20770]: audit 2026-03-09T18:01:44.000798+0000 mon.c (mon.2) 959 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:44.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:44 vm00 bash[20770]: audit 2026-03-09T18:01:44.000798+0000 mon.c (mon.2) 959 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:44.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:44 vm02 bash[23351]: cluster 2026-03-09T18:01:43.135651+0000 mgr.y (mgr.14505) 1321 : cluster [DBG] pgmap v1800: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:44 vm02 bash[23351]: cluster 2026-03-09T18:01:43.135651+0000 mgr.y (mgr.14505) 1321 : cluster [DBG] pgmap v1800: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:44 vm02 bash[23351]: audit 2026-03-09T18:01:43.165685+0000 mgr.y (mgr.14505) 1322 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:44 vm02 bash[23351]: audit 2026-03-09T18:01:43.165685+0000 mgr.y (mgr.14505) 1322 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:44 vm02 bash[23351]: audit 2026-03-09T18:01:44.000798+0000 mon.c (mon.2) 959 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:44 vm02 bash[23351]: audit 2026-03-09T18:01:44.000798+0000 mon.c (mon.2) 959 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:46 vm00 bash[28333]: cluster 2026-03-09T18:01:45.135934+0000 mgr.y (mgr.14505) 1323 : cluster [DBG] pgmap v1801: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:46 vm00 bash[28333]: cluster 2026-03-09T18:01:45.135934+0000 mgr.y (mgr.14505) 1323 : cluster [DBG] pgmap v1801: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:46.538 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:46 vm00 bash[20770]: cluster 2026-03-09T18:01:45.135934+0000 mgr.y (mgr.14505) 1323 : cluster [DBG] pgmap v1801: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:46 vm00 bash[20770]: cluster 2026-03-09T18:01:45.135934+0000 mgr.y (mgr.14505) 1323 : cluster [DBG] pgmap v1801: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:01:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:01:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:01:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:46 vm02 bash[23351]: cluster 2026-03-09T18:01:45.135934+0000 mgr.y (mgr.14505) 1323 : cluster [DBG] pgmap v1801: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:46 vm02 bash[23351]: cluster 2026-03-09T18:01:45.135934+0000 mgr.y (mgr.14505) 1323 : cluster [DBG] pgmap v1801: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:47 vm00 bash[28333]: cluster 2026-03-09T18:01:47.136447+0000 mgr.y (mgr.14505) 1324 : cluster [DBG] pgmap v1802: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:48.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:47 vm00 bash[28333]: cluster 2026-03-09T18:01:47.136447+0000 mgr.y (mgr.14505) 1324 : cluster [DBG] pgmap v1802: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:48.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:47 vm00 bash[20770]: cluster 2026-03-09T18:01:47.136447+0000 mgr.y (mgr.14505) 1324 : cluster [DBG] pgmap v1802: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:48.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:47 vm00 bash[20770]: cluster 2026-03-09T18:01:47.136447+0000 mgr.y (mgr.14505) 1324 : cluster [DBG] pgmap v1802: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:47 vm02 bash[23351]: cluster 2026-03-09T18:01:47.136447+0000 mgr.y (mgr.14505) 1324 : cluster [DBG] pgmap v1802: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:48.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:47 vm02 bash[23351]: cluster 2026-03-09T18:01:47.136447+0000 mgr.y (mgr.14505) 1324 : cluster [DBG] pgmap v1802: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:50 vm00 bash[28333]: cluster 2026-03-09T18:01:49.137087+0000 mgr.y (mgr.14505) 1325 : cluster [DBG] pgmap v1803: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:50 vm00 bash[28333]: cluster 2026-03-09T18:01:49.137087+0000 mgr.y (mgr.14505) 1325 : cluster [DBG] pgmap v1803: 188 pgs: 188 
active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:50 vm00 bash[20770]: cluster 2026-03-09T18:01:49.137087+0000 mgr.y (mgr.14505) 1325 : cluster [DBG] pgmap v1803: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:50 vm00 bash[20770]: cluster 2026-03-09T18:01:49.137087+0000 mgr.y (mgr.14505) 1325 : cluster [DBG] pgmap v1803: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:50 vm02 bash[23351]: cluster 2026-03-09T18:01:49.137087+0000 mgr.y (mgr.14505) 1325 : cluster [DBG] pgmap v1803: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:50 vm02 bash[23351]: cluster 2026-03-09T18:01:49.137087+0000 mgr.y (mgr.14505) 1325 : cluster [DBG] pgmap v1803: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:52 vm00 bash[28333]: cluster 2026-03-09T18:01:51.137479+0000 mgr.y (mgr.14505) 1326 : cluster [DBG] pgmap v1804: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:52 vm00 bash[28333]: cluster 2026-03-09T18:01:51.137479+0000 mgr.y (mgr.14505) 1326 : cluster [DBG] pgmap v1804: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:52 vm00 bash[20770]: cluster 2026-03-09T18:01:51.137479+0000 mgr.y (mgr.14505) 1326 : cluster [DBG] pgmap v1804: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:52 vm00 bash[20770]: cluster 2026-03-09T18:01:51.137479+0000 mgr.y (mgr.14505) 1326 : cluster [DBG] pgmap v1804: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:52 vm02 bash[23351]: cluster 2026-03-09T18:01:51.137479+0000 mgr.y (mgr.14505) 1326 : cluster [DBG] pgmap v1804: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:52 vm02 bash[23351]: cluster 2026-03-09T18:01:51.137479+0000 mgr.y (mgr.14505) 1326 : cluster [DBG] pgmap v1804: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:53.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:01:53 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:01:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:54 vm02 bash[23351]: cluster 2026-03-09T18:01:53.138192+0000 mgr.y (mgr.14505) 1327 : cluster [DBG] pgmap v1805: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:54 vm02 bash[23351]: cluster 2026-03-09T18:01:53.138192+0000 
mgr.y (mgr.14505) 1327 : cluster [DBG] pgmap v1805: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:54 vm02 bash[23351]: audit 2026-03-09T18:01:53.173668+0000 mgr.y (mgr.14505) 1328 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:54 vm02 bash[23351]: audit 2026-03-09T18:01:53.173668+0000 mgr.y (mgr.14505) 1328 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:54 vm00 bash[28333]: cluster 2026-03-09T18:01:53.138192+0000 mgr.y (mgr.14505) 1327 : cluster [DBG] pgmap v1805: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:54 vm00 bash[28333]: cluster 2026-03-09T18:01:53.138192+0000 mgr.y (mgr.14505) 1327 : cluster [DBG] pgmap v1805: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:54.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:54 vm00 bash[28333]: audit 2026-03-09T18:01:53.173668+0000 mgr.y (mgr.14505) 1328 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:54.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:54 vm00 bash[28333]: audit 2026-03-09T18:01:53.173668+0000 mgr.y (mgr.14505) 1328 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:54.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:54 vm00 bash[20770]: cluster 2026-03-09T18:01:53.138192+0000 mgr.y (mgr.14505) 1327 : cluster [DBG] pgmap v1805: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:54.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:54 vm00 bash[20770]: cluster 2026-03-09T18:01:53.138192+0000 mgr.y (mgr.14505) 1327 : cluster [DBG] pgmap v1805: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:01:54.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:54 vm00 bash[20770]: audit 2026-03-09T18:01:53.173668+0000 mgr.y (mgr.14505) 1328 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:54.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:54 vm00 bash[20770]: audit 2026-03-09T18:01:53.173668+0000 mgr.y (mgr.14505) 1328 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:01:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:56 vm00 bash[28333]: cluster 2026-03-09T18:01:55.138532+0000 mgr.y (mgr.14505) 1329 : cluster [DBG] pgmap v1806: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:56.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:56 vm00 bash[28333]: cluster 2026-03-09T18:01:55.138532+0000 mgr.y (mgr.14505) 1329 : cluster [DBG] pgmap v1806: 188 pgs: 188 active+clean; 484 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:56 vm00 bash[20770]: cluster 2026-03-09T18:01:55.138532+0000 mgr.y (mgr.14505) 1329 : cluster [DBG] pgmap v1806: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:56.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:56 vm00 bash[20770]: cluster 2026-03-09T18:01:55.138532+0000 mgr.y (mgr.14505) 1329 : cluster [DBG] pgmap v1806: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:56.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:01:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:01:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:01:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:56 vm02 bash[23351]: cluster 2026-03-09T18:01:55.138532+0000 mgr.y (mgr.14505) 1329 : cluster [DBG] pgmap v1806: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:56.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:56 vm02 bash[23351]: cluster 2026-03-09T18:01:55.138532+0000 mgr.y (mgr.14505) 1329 : cluster [DBG] pgmap v1806: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:57.093 INFO:tasks.workunit.client.0.vm00.stderr:+ seq 1 10 2026-03-09T18:01:57.093 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj1 /etc/passwd 2026-03-09T18:01:57.124 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj2 /etc/passwd 2026-03-09T18:01:57.154 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj3 /etc/passwd 2026-03-09T18:01:57.183 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj4 /etc/passwd 2026-03-09T18:01:57.212 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj5 /etc/passwd 2026-03-09T18:01:57.241 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj6 /etc/passwd 2026-03-09T18:01:57.270 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj7 /etc/passwd 2026-03-09T18:01:57.304 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj8 /etc/passwd 2026-03-09T18:01:57.334 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj9 /etc/passwd 2026-03-09T18:01:57.361 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bbf4d092-de96-4c27-8d15-50edd4b79fe3 put obj10 /etc/passwd 2026-03-09T18:01:57.390 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-09T18:01:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:58 vm00 bash[28333]: cluster 2026-03-09T18:01:57.138849+0000 mgr.y (mgr.14505) 1330 : cluster [DBG] pgmap v1807: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:58.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:58 vm00 bash[28333]: cluster 2026-03-09T18:01:57.138849+0000 mgr.y (mgr.14505) 1330 : cluster [DBG] pgmap v1807: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
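The workunit trace just above is the pool-quota exercise in this run: ten rados objects are written into a pool whose quota is capped at max_objects = 10 (the monitor confirms the quota and raises POOL_FULL a few entries further down). A minimal shell sketch of that kind of sequence, assuming an illustrative pool name quota-test in place of the generated UUID pool name the run actually uses:

    # Sketch only; pool name and pg count are illustrative, not taken from this run.
    POOL=quota-test
    ceph osd pool create "$POOL" 8                  # create a small test pool
    ceph osd pool set-quota "$POOL" max_objects 10  # cap the pool at 10 objects
    for i in $(seq 1 10); do
        rados -p "$POOL" put "obj$i" /etc/passwd    # fill the pool up to the quota
    done
    sleep 30                                        # give the mon time to flag the full pool
    ceph health detail                              # should now report POOL_FULL: 1 pool(s) full
    ceph osd pool set-quota "$POOL" max_objects 0   # cleanup: lift the quota again

The POOL_FULL warning and the "reached quota's max_objects: 10" message that appear in the monitor log below are the expected outcome of this sequence.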
18:01:58 vm00 bash[20770]: cluster 2026-03-09T18:01:57.138849+0000 mgr.y (mgr.14505) 1330 : cluster [DBG] pgmap v1807: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:58.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:58 vm00 bash[20770]: cluster 2026-03-09T18:01:57.138849+0000 mgr.y (mgr.14505) 1330 : cluster [DBG] pgmap v1807: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:58 vm02 bash[23351]: cluster 2026-03-09T18:01:57.138849+0000 mgr.y (mgr.14505) 1330 : cluster [DBG] pgmap v1807: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:58.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:58 vm02 bash[23351]: cluster 2026-03-09T18:01:57.138849+0000 mgr.y (mgr.14505) 1330 : cluster [DBG] pgmap v1807: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:01:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:59 vm00 bash[28333]: audit 2026-03-09T18:01:59.006724+0000 mon.c (mon.2) 960 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:01:59 vm00 bash[28333]: audit 2026-03-09T18:01:59.006724+0000 mon.c (mon.2) 960 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:59 vm00 bash[20770]: audit 2026-03-09T18:01:59.006724+0000 mon.c (mon.2) 960 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:01:59 vm00 bash[20770]: audit 2026-03-09T18:01:59.006724+0000 mon.c (mon.2) 960 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:59 vm02 bash[23351]: audit 2026-03-09T18:01:59.006724+0000 mon.c (mon.2) 960 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:01:59.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:01:59 vm02 bash[23351]: audit 2026-03-09T18:01:59.006724+0000 mon.c (mon.2) 960 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:00 vm00 bash[28333]: cluster 2026-03-09T18:01:59.139461+0000 mgr.y (mgr.14505) 1331 : cluster [DBG] pgmap v1808: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T18:02:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:00 vm00 bash[28333]: cluster 2026-03-09T18:01:59.139461+0000 mgr.y (mgr.14505) 1331 : cluster [DBG] pgmap v1808: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T18:02:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:00 vm00 bash[20770]: cluster 
2026-03-09T18:01:59.139461+0000 mgr.y (mgr.14505) 1331 : cluster [DBG] pgmap v1808: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T18:02:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:00 vm00 bash[20770]: cluster 2026-03-09T18:01:59.139461+0000 mgr.y (mgr.14505) 1331 : cluster [DBG] pgmap v1808: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T18:02:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:00 vm02 bash[23351]: cluster 2026-03-09T18:01:59.139461+0000 mgr.y (mgr.14505) 1331 : cluster [DBG] pgmap v1808: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T18:02:00.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:00 vm02 bash[23351]: cluster 2026-03-09T18:01:59.139461+0000 mgr.y (mgr.14505) 1331 : cluster [DBG] pgmap v1808: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.139741+0000 mgr.y (mgr.14505) 1332 : cluster [DBG] pgmap v1809: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.139741+0000 mgr.y (mgr.14505) 1332 : cluster [DBG] pgmap v1809: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.981981+0000 mon.a (mon.0) 3714 : cluster [WRN] pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' is full (reached quota's max_objects: 10) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.981981+0000 mon.a (mon.0) 3714 : cluster [WRN] pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' is full (reached quota's max_objects: 10) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.982170+0000 mon.a (mon.0) 3715 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.982170+0000 mon.a (mon.0) 3715 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.992359+0000 mon.a (mon.0) 3716 : cluster [DBG] osdmap e820: 8 total, 8 up, 8 in 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:02 vm00 bash[28333]: cluster 2026-03-09T18:02:01.992359+0000 mon.a (mon.0) 3716 : cluster [DBG] osdmap e820: 8 total, 8 up, 8 in 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.139741+0000 mgr.y (mgr.14505) 1332 : cluster [DBG] pgmap v1809: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.139741+0000 mgr.y (mgr.14505) 1332 : cluster 
[DBG] pgmap v1809: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.981981+0000 mon.a (mon.0) 3714 : cluster [WRN] pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' is full (reached quota's max_objects: 10) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.981981+0000 mon.a (mon.0) 3714 : cluster [WRN] pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' is full (reached quota's max_objects: 10) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.982170+0000 mon.a (mon.0) 3715 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.982170+0000 mon.a (mon.0) 3715 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.992359+0000 mon.a (mon.0) 3716 : cluster [DBG] osdmap e820: 8 total, 8 up, 8 in 2026-03-09T18:02:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:02 vm00 bash[20770]: cluster 2026-03-09T18:02:01.992359+0000 mon.a (mon.0) 3716 : cluster [DBG] osdmap e820: 8 total, 8 up, 8 in 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.139741+0000 mgr.y (mgr.14505) 1332 : cluster [DBG] pgmap v1809: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.139741+0000 mgr.y (mgr.14505) 1332 : cluster [DBG] pgmap v1809: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.981981+0000 mon.a (mon.0) 3714 : cluster [WRN] pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' is full (reached quota's max_objects: 10) 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.981981+0000 mon.a (mon.0) 3714 : cluster [WRN] pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' is full (reached quota's max_objects: 10) 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.982170+0000 mon.a (mon.0) 3715 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.982170+0000 mon.a (mon.0) 3715 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.992359+0000 mon.a (mon.0) 3716 : cluster [DBG] osdmap e820: 8 total, 8 up, 8 in 2026-03-09T18:02:02.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:02 vm02 bash[23351]: cluster 2026-03-09T18:02:01.992359+0000 mon.a (mon.0) 3716 : cluster [DBG] osdmap e820: 8 total, 8 up, 8 in 2026-03-09T18:02:03.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:02:03 
vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:04 vm00 bash[28333]: cluster 2026-03-09T18:02:03.140339+0000 mgr.y (mgr.14505) 1333 : cluster [DBG] pgmap v1811: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:04 vm00 bash[28333]: cluster 2026-03-09T18:02:03.140339+0000 mgr.y (mgr.14505) 1333 : cluster [DBG] pgmap v1811: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:04 vm00 bash[28333]: audit 2026-03-09T18:02:03.184311+0000 mgr.y (mgr.14505) 1334 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:04 vm00 bash[28333]: audit 2026-03-09T18:02:03.184311+0000 mgr.y (mgr.14505) 1334 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:04 vm00 bash[20770]: cluster 2026-03-09T18:02:03.140339+0000 mgr.y (mgr.14505) 1333 : cluster [DBG] pgmap v1811: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:04 vm00 bash[20770]: cluster 2026-03-09T18:02:03.140339+0000 mgr.y (mgr.14505) 1333 : cluster [DBG] pgmap v1811: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:04 vm00 bash[20770]: audit 2026-03-09T18:02:03.184311+0000 mgr.y (mgr.14505) 1334 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:04.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:04 vm00 bash[20770]: audit 2026-03-09T18:02:03.184311+0000 mgr.y (mgr.14505) 1334 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:04 vm02 bash[23351]: cluster 2026-03-09T18:02:03.140339+0000 mgr.y (mgr.14505) 1333 : cluster [DBG] pgmap v1811: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:04 vm02 bash[23351]: cluster 2026-03-09T18:02:03.140339+0000 mgr.y (mgr.14505) 1333 : cluster [DBG] pgmap v1811: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:04 vm02 bash[23351]: audit 2026-03-09T18:02:03.184311+0000 mgr.y (mgr.14505) 1334 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:04.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:04 vm02 bash[23351]: audit 2026-03-09T18:02:03.184311+0000 mgr.y (mgr.14505) 1334 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:06 vm00 bash[28333]: cluster 2026-03-09T18:02:05.140635+0000 mgr.y (mgr.14505) 1335 : cluster [DBG] pgmap v1812: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:06 vm00 bash[28333]: cluster 2026-03-09T18:02:05.140635+0000 mgr.y (mgr.14505) 1335 : cluster [DBG] pgmap v1812: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:06 vm00 bash[20770]: cluster 2026-03-09T18:02:05.140635+0000 mgr.y (mgr.14505) 1335 : cluster [DBG] pgmap v1812: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:06 vm00 bash[20770]: cluster 2026-03-09T18:02:05.140635+0000 mgr.y (mgr.14505) 1335 : cluster [DBG] pgmap v1812: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:02:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:02:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:02:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:06 vm02 bash[23351]: cluster 2026-03-09T18:02:05.140635+0000 mgr.y (mgr.14505) 1335 : cluster [DBG] pgmap v1812: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:06.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:06 vm02 bash[23351]: cluster 2026-03-09T18:02:05.140635+0000 mgr.y (mgr.14505) 1335 : cluster [DBG] pgmap v1812: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:08 vm00 bash[28333]: cluster 2026-03-09T18:02:07.140919+0000 mgr.y (mgr.14505) 1336 : cluster [DBG] pgmap v1813: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:08.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:08 vm00 bash[28333]: cluster 2026-03-09T18:02:07.140919+0000 mgr.y (mgr.14505) 1336 : cluster [DBG] pgmap v1813: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:08 vm00 bash[20770]: cluster 2026-03-09T18:02:07.140919+0000 mgr.y (mgr.14505) 1336 : cluster [DBG] pgmap v1813: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:08.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:08 vm00 bash[20770]: cluster 2026-03-09T18:02:07.140919+0000 mgr.y (mgr.14505) 1336 : cluster [DBG] pgmap v1813: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:08 vm02 bash[23351]: cluster 2026-03-09T18:02:07.140919+0000 mgr.y (mgr.14505) 1336 : cluster [DBG] pgmap v1813: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:08.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:08 vm02 bash[23351]: cluster 2026-03-09T18:02:07.140919+0000 mgr.y (mgr.14505) 1336 : cluster [DBG] pgmap v1813: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T18:02:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:10 vm00 bash[28333]: cluster 2026-03-09T18:02:09.141490+0000 mgr.y (mgr.14505) 1337 : cluster [DBG] pgmap v1814: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:10.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:10 vm00 bash[28333]: cluster 2026-03-09T18:02:09.141490+0000 mgr.y (mgr.14505) 1337 : cluster [DBG] pgmap v1814: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:10 vm00 bash[20770]: cluster 2026-03-09T18:02:09.141490+0000 mgr.y (mgr.14505) 1337 : cluster [DBG] pgmap v1814: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:10.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:10 vm00 bash[20770]: cluster 2026-03-09T18:02:09.141490+0000 mgr.y (mgr.14505) 1337 : cluster [DBG] pgmap v1814: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:10 vm02 bash[23351]: cluster 2026-03-09T18:02:09.141490+0000 mgr.y (mgr.14505) 1337 : cluster [DBG] pgmap v1814: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:10.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:10 vm02 bash[23351]: cluster 2026-03-09T18:02:09.141490+0000 mgr.y (mgr.14505) 1337 : cluster [DBG] pgmap v1814: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:12 vm00 bash[28333]: cluster 2026-03-09T18:02:11.141793+0000 mgr.y (mgr.14505) 1338 : cluster [DBG] pgmap v1815: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:12.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:12 vm00 bash[28333]: cluster 2026-03-09T18:02:11.141793+0000 mgr.y (mgr.14505) 1338 : cluster [DBG] pgmap v1815: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:12 vm00 bash[20770]: cluster 2026-03-09T18:02:11.141793+0000 mgr.y (mgr.14505) 1338 : cluster [DBG] pgmap v1815: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:12.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:12 vm00 bash[20770]: cluster 2026-03-09T18:02:11.141793+0000 mgr.y (mgr.14505) 1338 : cluster [DBG] pgmap v1815: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:12.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:12 vm02 bash[23351]: cluster 2026-03-09T18:02:11.141793+0000 mgr.y (mgr.14505) 1338 : cluster [DBG] pgmap v1815: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:12.886 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:12 vm02 bash[23351]: cluster 2026-03-09T18:02:11.141793+0000 mgr.y (mgr.14505) 1338 : cluster [DBG] pgmap v1815: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:13.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:02:13 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: cluster 2026-03-09T18:02:13.142350+0000 mgr.y (mgr.14505) 1339 : cluster [DBG] pgmap v1816: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: cluster 2026-03-09T18:02:13.142350+0000 mgr.y (mgr.14505) 1339 : cluster [DBG] pgmap v1816: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: audit 2026-03-09T18:02:13.194127+0000 mgr.y (mgr.14505) 1340 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: audit 2026-03-09T18:02:13.194127+0000 mgr.y (mgr.14505) 1340 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: audit 2026-03-09T18:02:14.017664+0000 mon.a (mon.0) 3717 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: audit 2026-03-09T18:02:14.017664+0000 mon.a (mon.0) 3717 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: audit 2026-03-09T18:02:14.018956+0000 mon.c (mon.2) 961 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:14 vm00 bash[28333]: audit 2026-03-09T18:02:14.018956+0000 mon.c (mon.2) 961 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: cluster 2026-03-09T18:02:13.142350+0000 mgr.y (mgr.14505) 1339 : cluster [DBG] pgmap v1816: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: cluster 2026-03-09T18:02:13.142350+0000 mgr.y (mgr.14505) 1339 : cluster [DBG] pgmap v1816: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: audit 2026-03-09T18:02:13.194127+0000 mgr.y (mgr.14505) 1340 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: audit 
2026-03-09T18:02:13.194127+0000 mgr.y (mgr.14505) 1340 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: audit 2026-03-09T18:02:14.017664+0000 mon.a (mon.0) 3717 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: audit 2026-03-09T18:02:14.017664+0000 mon.a (mon.0) 3717 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: audit 2026-03-09T18:02:14.018956+0000 mon.c (mon.2) 961 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:14.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:14 vm00 bash[20770]: audit 2026-03-09T18:02:14.018956+0000 mon.c (mon.2) 961 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: cluster 2026-03-09T18:02:13.142350+0000 mgr.y (mgr.14505) 1339 : cluster [DBG] pgmap v1816: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: cluster 2026-03-09T18:02:13.142350+0000 mgr.y (mgr.14505) 1339 : cluster [DBG] pgmap v1816: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: audit 2026-03-09T18:02:13.194127+0000 mgr.y (mgr.14505) 1340 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: audit 2026-03-09T18:02:13.194127+0000 mgr.y (mgr.14505) 1340 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: audit 2026-03-09T18:02:14.017664+0000 mon.a (mon.0) 3717 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: audit 2026-03-09T18:02:14.017664+0000 mon.a (mon.0) 3717 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: audit 2026-03-09T18:02:14.018956+0000 mon.c (mon.2) 961 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:14.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:14 vm02 bash[23351]: audit 2026-03-09T18:02:14.018956+0000 mon.c (mon.2) 961 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:16 vm00 bash[28333]: cluster 2026-03-09T18:02:15.142691+0000 mgr.y (mgr.14505) 1341 : cluster [DBG] pgmap v1817: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:16.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:16 vm00 bash[28333]: cluster 2026-03-09T18:02:15.142691+0000 mgr.y (mgr.14505) 1341 : cluster [DBG] pgmap v1817: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:16 vm00 bash[20770]: cluster 2026-03-09T18:02:15.142691+0000 mgr.y (mgr.14505) 1341 : cluster [DBG] pgmap v1817: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:16.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:16 vm00 bash[20770]: cluster 2026-03-09T18:02:15.142691+0000 mgr.y (mgr.14505) 1341 : cluster [DBG] pgmap v1817: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:16.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:02:16 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:02:16] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:02:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:16 vm02 bash[23351]: cluster 2026-03-09T18:02:15.142691+0000 mgr.y (mgr.14505) 1341 : cluster [DBG] pgmap v1817: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:16.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:16 vm02 bash[23351]: cluster 2026-03-09T18:02:15.142691+0000 mgr.y (mgr.14505) 1341 : cluster [DBG] pgmap v1817: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:18 vm00 bash[28333]: cluster 2026-03-09T18:02:17.142987+0000 mgr.y (mgr.14505) 1342 : cluster [DBG] pgmap v1818: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:18.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:18 vm00 bash[28333]: cluster 2026-03-09T18:02:17.142987+0000 mgr.y (mgr.14505) 1342 : cluster [DBG] pgmap v1818: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:18 vm00 bash[20770]: cluster 2026-03-09T18:02:17.142987+0000 mgr.y (mgr.14505) 1342 : cluster [DBG] pgmap v1818: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:18.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:18 vm00 bash[20770]: cluster 2026-03-09T18:02:17.142987+0000 mgr.y (mgr.14505) 1342 : cluster [DBG] pgmap v1818: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:18 vm02 bash[23351]: cluster 2026-03-09T18:02:17.142987+0000 mgr.y (mgr.14505) 1342 : cluster [DBG] pgmap v1818: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:18.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:18 vm02 bash[23351]: cluster 2026-03-09T18:02:17.142987+0000 mgr.y (mgr.14505) 1342 : cluster [DBG] pgmap v1818: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:20 vm00 bash[28333]: cluster 
2026-03-09T18:02:19.143550+0000 mgr.y (mgr.14505) 1343 : cluster [DBG] pgmap v1819: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:20.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:20 vm00 bash[28333]: cluster 2026-03-09T18:02:19.143550+0000 mgr.y (mgr.14505) 1343 : cluster [DBG] pgmap v1819: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:20 vm00 bash[20770]: cluster 2026-03-09T18:02:19.143550+0000 mgr.y (mgr.14505) 1343 : cluster [DBG] pgmap v1819: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:20.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:20 vm00 bash[20770]: cluster 2026-03-09T18:02:19.143550+0000 mgr.y (mgr.14505) 1343 : cluster [DBG] pgmap v1819: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:20 vm02 bash[23351]: cluster 2026-03-09T18:02:19.143550+0000 mgr.y (mgr.14505) 1343 : cluster [DBG] pgmap v1819: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:20.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:20 vm02 bash[23351]: cluster 2026-03-09T18:02:19.143550+0000 mgr.y (mgr.14505) 1343 : cluster [DBG] pgmap v1819: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:22 vm00 bash[28333]: cluster 2026-03-09T18:02:21.143858+0000 mgr.y (mgr.14505) 1344 : cluster [DBG] pgmap v1820: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:22.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:22 vm00 bash[28333]: cluster 2026-03-09T18:02:21.143858+0000 mgr.y (mgr.14505) 1344 : cluster [DBG] pgmap v1820: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:22 vm00 bash[20770]: cluster 2026-03-09T18:02:21.143858+0000 mgr.y (mgr.14505) 1344 : cluster [DBG] pgmap v1820: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:22.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:22 vm00 bash[20770]: cluster 2026-03-09T18:02:21.143858+0000 mgr.y (mgr.14505) 1344 : cluster [DBG] pgmap v1820: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:22 vm02 bash[23351]: cluster 2026-03-09T18:02:21.143858+0000 mgr.y (mgr.14505) 1344 : cluster [DBG] pgmap v1820: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:22.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:22 vm02 bash[23351]: cluster 2026-03-09T18:02:21.143858+0000 mgr.y (mgr.14505) 1344 : cluster [DBG] pgmap v1820: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:23.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:02:23 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:02:24.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:24 vm00 bash[28333]: cluster 2026-03-09T18:02:23.144493+0000 mgr.y (mgr.14505) 1345 : cluster [DBG] pgmap v1821: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:24 vm00 bash[28333]: cluster 2026-03-09T18:02:23.144493+0000 mgr.y (mgr.14505) 1345 : cluster [DBG] pgmap v1821: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:24 vm00 bash[28333]: audit 2026-03-09T18:02:23.204238+0000 mgr.y (mgr.14505) 1346 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:24.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:24 vm00 bash[28333]: audit 2026-03-09T18:02:23.204238+0000 mgr.y (mgr.14505) 1346 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:24.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:24 vm00 bash[20770]: cluster 2026-03-09T18:02:23.144493+0000 mgr.y (mgr.14505) 1345 : cluster [DBG] pgmap v1821: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:24.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:24 vm00 bash[20770]: cluster 2026-03-09T18:02:23.144493+0000 mgr.y (mgr.14505) 1345 : cluster [DBG] pgmap v1821: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:24.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:24 vm00 bash[20770]: audit 2026-03-09T18:02:23.204238+0000 mgr.y (mgr.14505) 1346 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:24.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:24 vm00 bash[20770]: audit 2026-03-09T18:02:23.204238+0000 mgr.y (mgr.14505) 1346 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:24 vm02 bash[23351]: cluster 2026-03-09T18:02:23.144493+0000 mgr.y (mgr.14505) 1345 : cluster [DBG] pgmap v1821: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:24 vm02 bash[23351]: cluster 2026-03-09T18:02:23.144493+0000 mgr.y (mgr.14505) 1345 : cluster [DBG] pgmap v1821: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:24 vm02 bash[23351]: audit 2026-03-09T18:02:23.204238+0000 mgr.y (mgr.14505) 1346 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:24.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:24 vm02 bash[23351]: audit 2026-03-09T18:02:23.204238+0000 mgr.y (mgr.14505) 1346 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:26.551 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:02:26 vm00 bash[21037]: 
::ffff:192.168.123.102 - - [09/Mar/2026:18:02:26] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:02:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:26 vm02 bash[23351]: cluster 2026-03-09T18:02:25.144854+0000 mgr.y (mgr.14505) 1347 : cluster [DBG] pgmap v1822: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:26.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:26 vm02 bash[23351]: cluster 2026-03-09T18:02:25.144854+0000 mgr.y (mgr.14505) 1347 : cluster [DBG] pgmap v1822: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:26 vm00 bash[28333]: cluster 2026-03-09T18:02:25.144854+0000 mgr.y (mgr.14505) 1347 : cluster [DBG] pgmap v1822: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:27.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:26 vm00 bash[28333]: cluster 2026-03-09T18:02:25.144854+0000 mgr.y (mgr.14505) 1347 : cluster [DBG] pgmap v1822: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:26 vm00 bash[20770]: cluster 2026-03-09T18:02:25.144854+0000 mgr.y (mgr.14505) 1347 : cluster [DBG] pgmap v1822: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:27.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:26 vm00 bash[20770]: cluster 2026-03-09T18:02:25.144854+0000 mgr.y (mgr.14505) 1347 : cluster [DBG] pgmap v1822: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:27.390 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c put threemore /etc/passwd 2026-03-09T18:02:27.577 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_bytes 0 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 -- 192.168.123.100:0/1930184935 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8104e20 msgr2=0x7fabf8105200 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 --2- 192.168.123.100:0/1930184935 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8104e20 0x7fabf8105200 secure :-1 s=READY pgs=2874 cs=0 l=1 rev1=1 crypto rx=0x7fabec009a30 tx=0x7fabec01c900 comp rx=0 tx=0).stop 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 -- 192.168.123.100:0/1930184935 shutdown_connections 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 --2- 192.168.123.100:0/1930184935 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf8109f50 0x7fabf8111ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 --2- 192.168.123.100:0/1930184935 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf81057d0 0x7fabf8109820 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
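Annotation: the "+"-prefixed tasks.workunit.client.0 stderr lines above are the test script echoing each command before it runs (bash xtrace). Here it stores one more object in the UUID-named test pool and then clears the pool's byte quota, since set-quota with a value of 0 removes the limit. A minimal sketch of the same pool-quota lifecycle, using a hypothetical pool name "testpool" in place of the UUID-named pool the test created:

    ceph osd pool create testpool
    ceph osd pool set-quota testpool max_bytes 1024    # impose a small byte quota
    rados -p testpool put obj1 /etc/passwd             # write an object into the pool
    ceph osd pool get-quota testpool                    # show the quotas currently in force
    ceph osd pool set-quota testpool max_bytes 0        # a value of 0 removes the quota again

The messenger lines that follow are the CLI tearing its monitor connections down once the command has been acknowledged.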
2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 --2- 192.168.123.100:0/1930184935 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8104e20 0x7fabf8105200 unknown :-1 s=CLOSED pgs=2874 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 -- 192.168.123.100:0/1930184935 >> 192.168.123.100:0/1930184935 conn(0x7fabf8100880 msgr2=0x7fabf8102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 -- 192.168.123.100:0/1930184935 shutdown_connections 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.797+0000 7fabff7e2640 1 -- 192.168.123.100:0/1930184935 wait complete. 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 Processor -- start 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- start start 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 0x7fabf819f190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf81057d0 0x7fabf819f6d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8109f50 0x7fabf81a3a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fabf8116c00 con 0x7fabf8104e20 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7fabf8116a80 con 0x7fabf8109f50 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fabf8116d80 con 0x7fabf81057d0 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 0x7fabf819f190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 0x7fabf819f190 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53104/0 (socket says 192.168.123.100:53104) 2026-03-09T18:02:27.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 
7fabfd557640 1 -- 192.168.123.100:0/3092072168 learned_addr learned my addr 192.168.123.100:0/3092072168 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 -- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf81057d0 msgr2=0x7fabf819f6d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfdd58640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8109f50 0x7fabf81a3a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfcd56640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf81057d0 0x7fabf819f6d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf81057d0 0x7fabf819f6d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 -- 192.168.123.100:0/3092072168 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8109f50 msgr2=0x7fabf81a3a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8109f50 0x7fabf81a3a60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fabf81a4140 con 0x7fabf8104e20 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfdd58640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8109f50 0x7fabf81a3a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfcd56640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf81057d0 0x7fabf819f6d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
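Annotation: the stderr lines around here trace the ceph CLI's msgr2 connection handshake at debug level 1. Each connection object (conn(0x…)) moves through s=NONE, BANNER_CONNECTING, HELLO_CONNECTING (where the peer reports the address it sees for the client), AUTH_CONNECTING and finally READY, while the redundant connections to the other monitors are marked down and stopped once one monitor session wins. A quick, illustrative way to tally those state tokens across the whole job log (the log file name is an assumption):

    # Count how often each msgr2 connection state appears in the captured log.
    grep -o 's=[A-Z_]\+' teuthology.log | sort | uniq -c | sort -rn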
2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabfd557640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 0x7fabf819f190 secure :-1 s=READY pgs=3153 cs=0 l=1 rev1=1 crypto rx=0x7fabec01cde0 tx=0x7fabec0a5eb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fabec005e60 con 0x7fabf8104e20 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fabec002800 con 0x7fabf8104e20 2026-03-09T18:02:27.805 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fabf81a43d0 con 0x7fabf8104e20 2026-03-09T18:02:27.807 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fabec0ae9a0 con 0x7fabf8104e20 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fabf81abd10 con 0x7fabf8104e20 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fabec0aeb40 con 0x7fabf8104e20 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.801+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fabf8102620 con 0x7fabf8104e20 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.805+0000 7fabe67fc640 1 --2- 192.168.123.100:0/3092072168 >> v2:192.168.123.100:6800/2673235927 conn(0x7fabd0077640 0x7fabd0079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.805+0000 7fabfcd56640 1 --2- 192.168.123.100:0/3092072168 >> v2:192.168.123.100:6800/2673235927 conn(0x7fabd0077640 0x7fabd0079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.805+0000 7fabfcd56640 1 --2- 192.168.123.100:0/3092072168 >> v2:192.168.123.100:6800/2673235927 conn(0x7fabd0077640 0x7fabd0079b00 secure :-1 s=READY pgs=4287 cs=0 l=1 rev1=1 crypto rx=0x7fabe8005fd0 tx=0x7fabe8005ea0 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.805+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(820..820 src has 1..820) ==== 
8124+0+0 (secure 0 0 0) 0x7fabec137170 con 0x7fabf8104e20 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.805+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=821}) -- 0x7fabd0083330 con 0x7fabf8104e20 2026-03-09T18:02:27.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.805+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fabf8102620 con 0x7fabf8104e20 2026-03-09T18:02:27.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:27.901+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"} v 0) -- 0x7fabf8106300 con 0x7fabf8104e20 2026-03-09T18:02:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:28 vm00 bash[20770]: cluster 2026-03-09T18:02:27.145275+0000 mgr.y (mgr.14505) 1348 : cluster [DBG] pgmap v1823: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:28.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:28 vm00 bash[20770]: cluster 2026-03-09T18:02:27.145275+0000 mgr.y (mgr.14505) 1348 : cluster [DBG] pgmap v1823: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:28 vm00 bash[28333]: cluster 2026-03-09T18:02:27.145275+0000 mgr.y (mgr.14505) 1348 : cluster [DBG] pgmap v1823: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:28.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:28 vm00 bash[28333]: cluster 2026-03-09T18:02:27.145275+0000 mgr.y (mgr.14505) 1348 : cluster [DBG] pgmap v1823: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:28.570 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:28.565+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v821) ==== 217+0+0 (secure 0 0 0) 0x7fabec102e20 con 0x7fabf8104e20 2026-03-09T18:02:28.621 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:28.617+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"} v 0) -- 0x7fabf8105200 con 0x7fabf8104e20 2026-03-09T18:02:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:28 vm02 bash[23351]: cluster 2026-03-09T18:02:27.145275+0000 mgr.y (mgr.14505) 1348 : cluster [DBG] pgmap v1823: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:28.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:28 vm02 bash[23351]: cluster 2026-03-09T18:02:27.145275+0000 mgr.y (mgr.14505) 1348 : cluster [DBG] pgmap v1823: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T18:02:28.645 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:28.641+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(821..821 src has 1..821) ==== 628+0+0 (secure 0 0 0) 0x7fabec0f7bd0 con 0x7fabf8104e20 2026-03-09T18:02:28.645 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:28.641+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=822}) -- 0x7fabd0084380 con 0x7fabf8104e20 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:27.994628+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:27.994628+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.186655+0000 mon.c (mon.2) 962 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.186655+0000 mon.c (mon.2) 962 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.569652+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.569652+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: cluster 2026-03-09T18:02:28.644972+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e821: 8 total, 8 up, 8 in 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: cluster 2026-03-09T18:02:28.644972+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e821: 8 total, 8 up, 8 in 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.645708+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? 
192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.645708+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.831287+0000 mon.a (mon.0) 3722 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.831287+0000 mon.a (mon.0) 3722 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.843443+0000 mon.a (mon.0) 3723 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:28.843443+0000 mon.a (mon.0) 3723 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.025101+0000 mon.c (mon.2) 963 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.025101+0000 mon.c (mon.2) 963 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.181597+0000 mon.c (mon.2) 964 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.181597+0000 mon.c (mon.2) 964 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.188723+0000 mon.c (mon.2) 965 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.188723+0000 mon.c (mon.2) 965 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.195969+0000 mon.a (mon.0) 3724 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:29 vm00 bash[28333]: audit 2026-03-09T18:02:29.195969+0000 mon.a (mon.0) 3724 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:27.994628+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:27.994628+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.186655+0000 mon.c (mon.2) 962 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.186655+0000 mon.c (mon.2) 962 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.569652+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.569652+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: cluster 2026-03-09T18:02:28.644972+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e821: 8 total, 8 up, 8 in 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: cluster 2026-03-09T18:02:28.644972+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e821: 8 total, 8 up, 8 in 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.645708+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.645708+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? 
192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.831287+0000 mon.a (mon.0) 3722 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.831287+0000 mon.a (mon.0) 3722 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.843443+0000 mon.a (mon.0) 3723 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:28.843443+0000 mon.a (mon.0) 3723 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.025101+0000 mon.c (mon.2) 963 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.025101+0000 mon.c (mon.2) 963 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.181597+0000 mon.c (mon.2) 964 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.181597+0000 mon.c (mon.2) 964 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.188723+0000 mon.c (mon.2) 965 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.188723+0000 mon.c (mon.2) 965 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.195969+0000 mon.a (mon.0) 3724 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:29 vm00 bash[20770]: audit 2026-03-09T18:02:29.195969+0000 mon.a (mon.0) 3724 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.586 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.581+0000 7fabe67fc640 1 -- 192.168.123.100:0/3092072168 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v822) ==== 217+0+0 (secure 0 0 
0) 0x7fabec016610 con 0x7fabf8104e20 2026-03-09T18:02:29.588 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_bytes = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:02:29.588 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 >> v2:192.168.123.100:6800/2673235927 conn(0x7fabd0077640 msgr2=0x7fabd0079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:29.588 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 --2- 192.168.123.100:0/3092072168 >> v2:192.168.123.100:6800/2673235927 conn(0x7fabd0077640 0x7fabd0079b00 secure :-1 s=READY pgs=4287 cs=0 l=1 rev1=1 crypto rx=0x7fabe8005fd0 tx=0x7fabe8005ea0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.588 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 msgr2=0x7fabf819f190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:29.588 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 0x7fabf819f190 secure :-1 s=READY pgs=3153 cs=0 l=1 rev1=1 crypto rx=0x7fabec01cde0 tx=0x7fabec0a5eb0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 shutdown_connections 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 --2- 192.168.123.100:0/3092072168 >> v2:192.168.123.100:6800/2673235927 conn(0x7fabd0077640 0x7fabd0079b00 unknown :-1 s=CLOSED pgs=4287 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fabf8109f50 0x7fabf81a3a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabf81057d0 0x7fabf819f6d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 --2- 192.168.123.100:0/3092072168 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabf8104e20 0x7fabf819f190 unknown :-1 s=CLOSED pgs=3153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 >> 192.168.123.100:0/3092072168 conn(0x7fabf8100880 msgr2=0x7fabf8100e80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 shutdown_connections 2026-03-09T18:02:29.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.585+0000 7fabff7e2640 1 -- 192.168.123.100:0/3092072168 wait complete. 
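Annotation: as the preceding lines show, every ceph invocation in the workunit is a short-lived client. It connects to a monitor, fetches get_command_descriptions, sends the osd pool set-quota request as a JSON mon_command, waits for the mon_command_ack and the osdmap it subscribed to, then marks all connections down and exits (the "wait complete." line). The positional CLI form maps onto the JSON body visible in those mon_command lines. A hedged way to confirm the result afterwards, again with a hypothetical pool name and assuming jq is available:

    ceph osd pool get-quota testpool                      # human-readable quota summary
    ceph osd dump --format json \
      | jq '.pools[] | {pool_name, quota_max_bytes, quota_max_objects}'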
2026-03-09T18:02:29.608 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c max_objects 0 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:27.994628+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:27.994628+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.186655+0000 mon.c (mon.2) 962 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.186655+0000 mon.c (mon.2) 962 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.569652+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.569652+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: cluster 2026-03-09T18:02:28.644972+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e821: 8 total, 8 up, 8 in 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: cluster 2026-03-09T18:02:28.644972+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e821: 8 total, 8 up, 8 in 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.645708+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.645708+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? 
192.168.123.100:0/3092072168' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.831287+0000 mon.a (mon.0) 3722 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.831287+0000 mon.a (mon.0) 3722 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.843443+0000 mon.a (mon.0) 3723 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:28.843443+0000 mon.a (mon.0) 3723 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.025101+0000 mon.c (mon.2) 963 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.025101+0000 mon.c (mon.2) 963 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.181597+0000 mon.c (mon.2) 964 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.181597+0000 mon.c (mon.2) 964 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.188723+0000 mon.c (mon.2) 965 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.188723+0000 mon.c (mon.2) 965 : audit [INF] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.195969+0000 mon.a (mon.0) 3724 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:29 vm02 bash[23351]: audit 2026-03-09T18:02:29.195969+0000 mon.a (mon.0) 3724 : audit [INF] from='mgr.14505 ' entity='mgr.y' 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- 192.168.123.100:0/3651251220 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c101e30 msgr2=0x7f563c10ed00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- 
192.168.123.100:0/3651251220 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c101e30 0x7f563c10ed00 secure :-1 s=READY pgs=3226 cs=0 l=1 rev1=1 crypto rx=0x7f562c009a30 tx=0x7f562c01c920 comp rx=0 tx=0).stop 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- 192.168.123.100:0/3651251220 shutdown_connections 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- 192.168.123.100:0/3651251220 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c10f240 0x7f563c111630 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- 192.168.123.100:0/3651251220 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c101e30 0x7f563c10ed00 unknown :-1 s=CLOSED pgs=3226 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- 192.168.123.100:0/3651251220 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f563c101510 0x7f563c1018f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- 192.168.123.100:0/3651251220 >> 192.168.123.100:0/3651251220 conn(0x7f563c0fd3c0 msgr2=0x7f563c0ff7e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- 192.168.123.100:0/3651251220 shutdown_connections 2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- 192.168.123.100:0/3651251220 wait complete. 
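Annotation: because every cluster-log entry is relayed to each monitor and then echoed again by the journalctl capture, the same audit and pgmap messages appear several times above. When only the test's own actions matter, the xtrace echoes from the workunit task are enough to reconstruct what was executed; an illustrative extraction (log file name assumed):

    # Pull just the commands the client.0 workunit ran, in order.
    grep -o 'INFO:tasks\.workunit\.client\.0.*stderr:+ .*' teuthology.log \
      | sed 's/.*stderr:+ //'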
2026-03-09T18:02:29.674 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 Processor -- start 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- start start 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f563c101510 0x7f563c19f120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c101e30 0x7f563c19f660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 0x7f563c1a39f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f563c116bc0 con 0x7f563c101e30 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f563c116a40 con 0x7f563c101510 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564291e640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f563c116d40 con 0x7f563c10f240 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564111b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c101e30 0x7f563c19f660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 0x7f563c1a39f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 0x7f563c1a39f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:59676/0 (socket says 192.168.123.100:59676) 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 -- 192.168.123.100:0/2102194944 learned_addr learned my addr 192.168.123.100:0/2102194944 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 -- 192.168.123.100:0/2102194944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f563c101510 msgr2=0x7f563c19f120 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:02:29.675 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564191c640 1 --2- 
192.168.123.100:0/2102194944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f563c101510 0x7f563c19f120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f563c101510 0x7f563c19f120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 -- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c101e30 msgr2=0x7f563c19f660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c101e30 0x7f563c19f660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f563c1a4170 con 0x7f563c10f240 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564111b640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c101e30 0x7f563c19f660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
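Annotation: the subscription traffic in the surrounding lines shows how the client tracks map changes. Right after the handshake it sends mon_subscribe({config=0+,monmap=0+}), then subscribes to the mgrmap and osdmap, and each osd_map(N..N) delivery is answered with a renewed mon_subscribe({osdmap=N+1}). Each accepted quota change produces a new osdmap epoch (e821, e822 and e823 in these lines). A small sketch for checking the current epoch by hand on a cluster host; the exact output layout varies by release, so treat the field positions as an assumption:

    ceph osd stat                 # summary line ends with the current osdmap epoch
    ceph osd dump | head -n1      # first line reads "epoch NNN"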
2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f564211d640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 0x7f563c1a39f0 secure :-1 s=READY pgs=3227 cs=0 l=1 rev1=1 crypto rx=0x7f563800b9e0 tx=0x7f563800bea0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f563800c710 con 0x7f563c10f240 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.669+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f5638010070 con 0x7f563c10f240 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f563c1a4460 con 0x7f563c10f240 2026-03-09T18:02:29.676 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f563c1abca0 con 0x7f563c10f240 2026-03-09T18:02:29.677 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f563800cab0 con 0x7f563c10f240 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5604005190 con 0x7f563c10f240 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f5638004590 con 0x7f563c10f240 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f562affd640 1 --2- 192.168.123.100:0/2102194944 >> v2:192.168.123.100:6800/2673235927 conn(0x7f56180776d0 0x7f5618079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(822..822 src has 1..822) ==== 8124+0+0 (secure 0 0 0) 0x7f5638099a60 con 0x7f563c10f240 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.673+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=823}) -- 0x7f56180833c0 con 0x7f563c10f240 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.677+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f563801c2e0 con 0x7f563c10f240 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.677+0000 7f564191c640 1 
--2- 192.168.123.100:0/2102194944 >> v2:192.168.123.100:6800/2673235927 conn(0x7f56180776d0 0x7f5618079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:02:29.681 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.677+0000 7f564191c640 1 --2- 192.168.123.100:0/2102194944 >> v2:192.168.123.100:6800/2673235927 conn(0x7f56180776d0 0x7f5618079b90 secure :-1 s=READY pgs=4288 cs=0 l=1 rev1=1 crypto rx=0x7f563000a8d0 tx=0x7f5630008040 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:02:29.773 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:29.769+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"} v 0) -- 0x7f5604005480 con 0x7f563c10f240 2026-03-09T18:02:30.605 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:30.601+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 7 ==== osd_map(823..823 src has 1..823) ==== 628+0+0 (secure 0 0 0) 0x7f563805e0a0 con 0x7f563c10f240 2026-03-09T18:02:30.605 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:30.601+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=824}) -- 0x7f5618083ff0 con 0x7f563c10f240 2026-03-09T18:02:30.609 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:30.605+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v823) ==== 221+0+0 (secure 0 0 0) 0x7f56380048b0 con 0x7f563c10f240 2026-03-09T18:02:30.659 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:30.653+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"} v 0) -- 0x7f5604003560 con 0x7f563c10f240 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: cluster 2026-03-09T18:02:29.145681+0000 mgr.y (mgr.14505) 1349 : cluster [DBG] pgmap v1825: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: cluster 2026-03-09T18:02:29.145681+0000 mgr.y (mgr.14505) 1349 : cluster [DBG] pgmap v1825: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: audit 2026-03-09T18:02:29.586143+0000 mon.a (mon.0) 3725 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: audit 2026-03-09T18:02:29.586143+0000 mon.a (mon.0) 3725 : audit [INF] from='client.? 
192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: cluster 2026-03-09T18:02:29.596107+0000 mon.a (mon.0) 3726 : cluster [DBG] osdmap e822: 8 total, 8 up, 8 in 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: cluster 2026-03-09T18:02:29.596107+0000 mon.a (mon.0) 3726 : cluster [DBG] osdmap e822: 8 total, 8 up, 8 in 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: audit 2026-03-09T18:02:29.774360+0000 mon.c (mon.2) 966 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: audit 2026-03-09T18:02:29.774360+0000 mon.c (mon.2) 966 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: audit 2026-03-09T18:02:29.775667+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:30.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:30 vm02 bash[23351]: audit 2026-03-09T18:02:29.775667+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: cluster 2026-03-09T18:02:29.145681+0000 mgr.y (mgr.14505) 1349 : cluster [DBG] pgmap v1825: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: cluster 2026-03-09T18:02:29.145681+0000 mgr.y (mgr.14505) 1349 : cluster [DBG] pgmap v1825: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: audit 2026-03-09T18:02:29.586143+0000 mon.a (mon.0) 3725 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: audit 2026-03-09T18:02:29.586143+0000 mon.a (mon.0) 3725 : audit [INF] from='client.? 
192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: cluster 2026-03-09T18:02:29.596107+0000 mon.a (mon.0) 3726 : cluster [DBG] osdmap e822: 8 total, 8 up, 8 in 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: cluster 2026-03-09T18:02:29.596107+0000 mon.a (mon.0) 3726 : cluster [DBG] osdmap e822: 8 total, 8 up, 8 in 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: audit 2026-03-09T18:02:29.774360+0000 mon.c (mon.2) 966 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: audit 2026-03-09T18:02:29.774360+0000 mon.c (mon.2) 966 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: audit 2026-03-09T18:02:29.775667+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:30 vm00 bash[28333]: audit 2026-03-09T18:02:29.775667+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: cluster 2026-03-09T18:02:29.145681+0000 mgr.y (mgr.14505) 1349 : cluster [DBG] pgmap v1825: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: cluster 2026-03-09T18:02:29.145681+0000 mgr.y (mgr.14505) 1349 : cluster [DBG] pgmap v1825: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: audit 2026-03-09T18:02:29.586143+0000 mon.a (mon.0) 3725 : audit [INF] from='client.? 192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: audit 2026-03-09T18:02:29.586143+0000 mon.a (mon.0) 3725 : audit [INF] from='client.? 
192.168.123.100:0/3092072168' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: cluster 2026-03-09T18:02:29.596107+0000 mon.a (mon.0) 3726 : cluster [DBG] osdmap e822: 8 total, 8 up, 8 in 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: cluster 2026-03-09T18:02:29.596107+0000 mon.a (mon.0) 3726 : cluster [DBG] osdmap e822: 8 total, 8 up, 8 in 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: audit 2026-03-09T18:02:29.774360+0000 mon.c (mon.2) 966 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: audit 2026-03-09T18:02:29.774360+0000 mon.c (mon.2) 966 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: audit 2026-03-09T18:02:29.775667+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:30 vm00 bash[20770]: audit 2026-03-09T18:02:29.775667+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.626 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.621+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 9 ==== osd_map(824..824 src has 1..824) ==== 628+0+0 (secure 0 0 0) 0x7f563805dd40 con 0x7f563c10f240 2026-03-09T18:02:31.626 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.621+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=825}) -- 0x7f5618084630 con 0x7f563c10f240 2026-03-09T18:02:31.632 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f562affd640 1 -- 192.168.123.100:0/2102194944 <== mon.2 v2:192.168.123.100:3301/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c v824) ==== 221+0+0 (secure 0 0 0) 0x7f56380660b0 con 0x7f563c10f240 2026-03-09T18:02:31.632 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 0 for pool bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 >> v2:192.168.123.100:6800/2673235927 conn(0x7f56180776d0 msgr2=0x7f5618079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 --2- 192.168.123.100:0/2102194944 >> v2:192.168.123.100:6800/2673235927 conn(0x7f56180776d0 0x7f5618079b90 secure :-1 s=READY pgs=4288 cs=0 l=1 rev1=1 crypto rx=0x7f563000a8d0 tx=0x7f5630008040 comp rx=0 tx=0).stop 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 msgr2=0x7f563c1a39f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 0x7f563c1a39f0 secure :-1 s=READY pgs=3227 cs=0 l=1 rev1=1 crypto rx=0x7f563800b9e0 tx=0x7f563800bea0 comp rx=0 tx=0).stop 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 shutdown_connections 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 --2- 192.168.123.100:0/2102194944 >> v2:192.168.123.100:6800/2673235927 conn(0x7f56180776d0 0x7f5618079b90 unknown :-1 s=CLOSED pgs=4288 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f563c10f240 0x7f563c1a39f0 unknown :-1 s=CLOSED pgs=3227 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f563c101e30 
0x7f563c19f660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 --2- 192.168.123.100:0/2102194944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f563c101510 0x7f563c19f120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 >> 192.168.123.100:0/2102194944 conn(0x7f563c0fd3c0 msgr2=0x7f563c10f840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 shutdown_connections 2026-03-09T18:02:31.635 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:02:31.629+0000 7f564291e640 1 -- 192.168.123.100:0/2102194944 wait complete. 2026-03-09T18:02:31.646 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: audit 2026-03-09T18:02:30.596910+0000 mon.a (mon.0) 3728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: audit 2026-03-09T18:02:30.596910+0000 mon.a (mon.0) 3728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: cluster 2026-03-09T18:02:30.606028+0000 mon.a (mon.0) 3729 : cluster [DBG] osdmap e823: 8 total, 8 up, 8 in 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: cluster 2026-03-09T18:02:30.606028+0000 mon.a (mon.0) 3729 : cluster [DBG] osdmap e823: 8 total, 8 up, 8 in 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: audit 2026-03-09T18:02:30.659980+0000 mon.c (mon.2) 967 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: audit 2026-03-09T18:02:30.659980+0000 mon.c (mon.2) 967 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: audit 2026-03-09T18:02:30.660579+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: audit 2026-03-09T18:02:30.660579+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: cluster 2026-03-09T18:02:31.146089+0000 mgr.y (mgr.14505) 1350 : cluster [DBG] pgmap v1828: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T18:02:31.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:31 vm02 bash[23351]: cluster 2026-03-09T18:02:31.146089+0000 mgr.y (mgr.14505) 1350 : cluster [DBG] pgmap v1828: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T18:02:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: audit 2026-03-09T18:02:30.596910+0000 mon.a (mon.0) 3728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: audit 2026-03-09T18:02:30.596910+0000 mon.a (mon.0) 3728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: cluster 2026-03-09T18:02:30.606028+0000 mon.a (mon.0) 3729 : cluster [DBG] osdmap e823: 8 total, 8 up, 8 in 2026-03-09T18:02:32.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: cluster 2026-03-09T18:02:30.606028+0000 mon.a (mon.0) 3729 : cluster [DBG] osdmap e823: 8 total, 8 up, 8 in 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: audit 2026-03-09T18:02:30.659980+0000 mon.c (mon.2) 967 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: audit 2026-03-09T18:02:30.659980+0000 mon.c (mon.2) 967 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: audit 2026-03-09T18:02:30.660579+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: audit 2026-03-09T18:02:30.660579+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: cluster 2026-03-09T18:02:31.146089+0000 mgr.y (mgr.14505) 1350 : cluster [DBG] pgmap v1828: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:31 vm00 bash[28333]: cluster 2026-03-09T18:02:31.146089+0000 mgr.y (mgr.14505) 1350 : cluster [DBG] pgmap v1828: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: audit 2026-03-09T18:02:30.596910+0000 mon.a (mon.0) 3728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: audit 2026-03-09T18:02:30.596910+0000 mon.a (mon.0) 3728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: cluster 2026-03-09T18:02:30.606028+0000 mon.a (mon.0) 3729 : cluster [DBG] osdmap e823: 8 total, 8 up, 8 in 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: cluster 2026-03-09T18:02:30.606028+0000 mon.a (mon.0) 3729 : cluster [DBG] osdmap e823: 8 total, 8 up, 8 in 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: audit 2026-03-09T18:02:30.659980+0000 mon.c (mon.2) 967 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: audit 2026-03-09T18:02:30.659980+0000 mon.c (mon.2) 967 : audit [INF] from='client.? 192.168.123.100:0/2102194944' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: audit 2026-03-09T18:02:30.660579+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: audit 2026-03-09T18:02:30.660579+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: cluster 2026-03-09T18:02:31.146089+0000 mgr.y (mgr.14505) 1350 : cluster [DBG] pgmap v1828: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T18:02:32.039 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:31 vm00 bash[20770]: cluster 2026-03-09T18:02:31.146089+0000 mgr.y (mgr.14505) 1350 : cluster [DBG] pgmap v1828: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T18:02:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:32 vm02 bash[23351]: audit 2026-03-09T18:02:31.609229+0000 mon.a (mon.0) 3731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:32 vm02 bash[23351]: audit 2026-03-09T18:02:31.609229+0000 mon.a (mon.0) 3731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:32 vm02 bash[23351]: cluster 2026-03-09T18:02:31.631393+0000 mon.a (mon.0) 3732 : cluster [DBG] osdmap e824: 8 total, 8 up, 8 in 2026-03-09T18:02:32.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:32 vm02 bash[23351]: cluster 2026-03-09T18:02:31.631393+0000 mon.a (mon.0) 3732 : cluster [DBG] osdmap e824: 8 total, 8 up, 8 in 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:32 vm00 bash[28333]: audit 2026-03-09T18:02:31.609229+0000 mon.a (mon.0) 3731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:32 vm00 bash[28333]: audit 2026-03-09T18:02:31.609229+0000 mon.a (mon.0) 3731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:32 vm00 bash[28333]: cluster 2026-03-09T18:02:31.631393+0000 mon.a (mon.0) 3732 : cluster [DBG] osdmap e824: 8 total, 8 up, 8 in 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:32 vm00 bash[28333]: cluster 2026-03-09T18:02:31.631393+0000 mon.a (mon.0) 3732 : cluster [DBG] osdmap e824: 8 total, 8 up, 8 in 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:32 vm00 bash[20770]: audit 2026-03-09T18:02:31.609229+0000 mon.a (mon.0) 3731 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:32 vm00 bash[20770]: audit 2026-03-09T18:02:31.609229+0000 mon.a (mon.0) 3731 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "field": "max_objects", "val": "0"}]': finished 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:32 vm00 bash[20770]: cluster 2026-03-09T18:02:31.631393+0000 mon.a (mon.0) 3732 : cluster [DBG] osdmap e824: 8 total, 8 up, 8 in 2026-03-09T18:02:33.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:32 vm00 bash[20770]: cluster 2026-03-09T18:02:31.631393+0000 mon.a (mon.0) 3732 : cluster [DBG] osdmap e824: 8 total, 8 up, 8 in 2026-03-09T18:02:33.620 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:02:33 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:02:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:33 vm02 bash[23351]: cluster 2026-03-09T18:02:33.146709+0000 mgr.y (mgr.14505) 1351 : cluster [DBG] pgmap v1830: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:02:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:33 vm02 bash[23351]: cluster 2026-03-09T18:02:33.146709+0000 mgr.y (mgr.14505) 1351 : cluster [DBG] pgmap v1830: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:02:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:33 vm02 bash[23351]: audit 2026-03-09T18:02:33.207598+0000 mgr.y (mgr.14505) 1352 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:33.886 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:33 vm02 bash[23351]: audit 2026-03-09T18:02:33.207598+0000 mgr.y (mgr.14505) 1352 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:33 vm00 bash[28333]: cluster 2026-03-09T18:02:33.146709+0000 mgr.y (mgr.14505) 1351 : cluster [DBG] pgmap v1830: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:33 vm00 bash[28333]: cluster 2026-03-09T18:02:33.146709+0000 mgr.y (mgr.14505) 1351 : cluster [DBG] pgmap v1830: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:33 vm00 bash[28333]: audit 2026-03-09T18:02:33.207598+0000 mgr.y (mgr.14505) 1352 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:33 vm00 bash[28333]: audit 2026-03-09T18:02:33.207598+0000 mgr.y (mgr.14505) 1352 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:33 vm00 bash[20770]: cluster 2026-03-09T18:02:33.146709+0000 mgr.y (mgr.14505) 1351 : cluster [DBG] pgmap v1830: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:33 vm00 bash[20770]: cluster 2026-03-09T18:02:33.146709+0000 mgr.y (mgr.14505) 1351 : cluster [DBG] pgmap v1830: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:33 vm00 bash[20770]: audit 2026-03-09T18:02:33.207598+0000 mgr.y (mgr.14505) 1352 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:34.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:33 vm00 bash[20770]: audit 2026-03-09T18:02:33.207598+0000 mgr.y (mgr.14505) 1352 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:36 vm00 bash[28333]: cluster 2026-03-09T18:02:35.147039+0000 mgr.y (mgr.14505) 1353 : cluster [DBG] pgmap v1831: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:36.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:36 vm00 bash[28333]: cluster 2026-03-09T18:02:35.147039+0000 mgr.y (mgr.14505) 1353 : cluster [DBG] pgmap v1831: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:36 vm00 bash[20770]: cluster 2026-03-09T18:02:35.147039+0000 mgr.y (mgr.14505) 1353 : cluster [DBG] pgmap v1831: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:36.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:36 vm00 bash[20770]: cluster 2026-03-09T18:02:35.147039+0000 mgr.y (mgr.14505) 1353 : cluster [DBG] pgmap v1831: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:36.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:02:36 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:02:36] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:02:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:36 vm02 bash[23351]: cluster 2026-03-09T18:02:35.147039+0000 mgr.y (mgr.14505) 1353 : cluster [DBG] pgmap v1831: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:36.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:36 vm02 bash[23351]: cluster 2026-03-09T18:02:35.147039+0000 mgr.y (mgr.14505) 1353 : cluster [DBG] pgmap v1831: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:38 vm00 bash[28333]: cluster 2026-03-09T18:02:37.147372+0000 mgr.y (mgr.14505) 1354 : cluster [DBG] pgmap v1832: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 678 B/s rd, 0 op/s 2026-03-09T18:02:39.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:38 vm00 bash[28333]: cluster 2026-03-09T18:02:37.147372+0000 mgr.y (mgr.14505) 1354 : cluster [DBG] pgmap v1832: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 678 B/s rd, 0 op/s 2026-03-09T18:02:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:38 vm00 bash[20770]: cluster 2026-03-09T18:02:37.147372+0000 mgr.y (mgr.14505) 1354 : cluster [DBG] pgmap v1832: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 678 B/s rd, 0 op/s 2026-03-09T18:02:39.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:38 vm00 bash[20770]: cluster 
2026-03-09T18:02:37.147372+0000 mgr.y (mgr.14505) 1354 : cluster [DBG] pgmap v1832: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 678 B/s rd, 0 op/s 2026-03-09T18:02:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:38 vm02 bash[23351]: cluster 2026-03-09T18:02:37.147372+0000 mgr.y (mgr.14505) 1354 : cluster [DBG] pgmap v1832: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 678 B/s rd, 0 op/s 2026-03-09T18:02:39.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:38 vm02 bash[23351]: cluster 2026-03-09T18:02:37.147372+0000 mgr.y (mgr.14505) 1354 : cluster [DBG] pgmap v1832: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 678 B/s rd, 0 op/s 2026-03-09T18:02:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:39 vm00 bash[28333]: cluster 2026-03-09T18:02:39.147974+0000 mgr.y (mgr.14505) 1355 : cluster [DBG] pgmap v1833: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:40.038 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:39 vm00 bash[28333]: cluster 2026-03-09T18:02:39.147974+0000 mgr.y (mgr.14505) 1355 : cluster [DBG] pgmap v1833: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:39 vm00 bash[20770]: cluster 2026-03-09T18:02:39.147974+0000 mgr.y (mgr.14505) 1355 : cluster [DBG] pgmap v1833: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:40.038 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:39 vm00 bash[20770]: cluster 2026-03-09T18:02:39.147974+0000 mgr.y (mgr.14505) 1355 : cluster [DBG] pgmap v1833: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:39 vm02 bash[23351]: cluster 2026-03-09T18:02:39.147974+0000 mgr.y (mgr.14505) 1355 : cluster [DBG] pgmap v1833: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:40.136 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:39 vm02 bash[23351]: cluster 2026-03-09T18:02:39.147974+0000 mgr.y (mgr.14505) 1355 : cluster [DBG] pgmap v1833: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:42 vm00 bash[28333]: cluster 2026-03-09T18:02:41.148240+0000 mgr.y (mgr.14505) 1356 : cluster [DBG] pgmap v1834: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:42.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:42 vm00 bash[28333]: cluster 2026-03-09T18:02:41.148240+0000 mgr.y (mgr.14505) 1356 : cluster [DBG] pgmap v1834: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:42 vm00 bash[20770]: cluster 2026-03-09T18:02:41.148240+0000 mgr.y (mgr.14505) 1356 : cluster [DBG] pgmap v1834: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:42.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:42 vm00 bash[20770]: cluster 2026-03-09T18:02:41.148240+0000 mgr.y (mgr.14505) 1356 : cluster [DBG] pgmap v1834: 
188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:42 vm02 bash[23351]: cluster 2026-03-09T18:02:41.148240+0000 mgr.y (mgr.14505) 1356 : cluster [DBG] pgmap v1834: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:42.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:42 vm02 bash[23351]: cluster 2026-03-09T18:02:41.148240+0000 mgr.y (mgr.14505) 1356 : cluster [DBG] pgmap v1834: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:02:43.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:02:43 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:02:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:44 vm00 bash[28333]: cluster 2026-03-09T18:02:43.148675+0000 mgr.y (mgr.14505) 1357 : cluster [DBG] pgmap v1835: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:44 vm00 bash[28333]: cluster 2026-03-09T18:02:43.148675+0000 mgr.y (mgr.14505) 1357 : cluster [DBG] pgmap v1835: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:44 vm00 bash[28333]: audit 2026-03-09T18:02:43.217882+0000 mgr.y (mgr.14505) 1358 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:44 vm00 bash[28333]: audit 2026-03-09T18:02:43.217882+0000 mgr.y (mgr.14505) 1358 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:44 vm00 bash[28333]: audit 2026-03-09T18:02:44.032369+0000 mon.c (mon.2) 968 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:44.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:44 vm00 bash[28333]: audit 2026-03-09T18:02:44.032369+0000 mon.c (mon.2) 968 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:44.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:44 vm00 bash[20770]: cluster 2026-03-09T18:02:43.148675+0000 mgr.y (mgr.14505) 1357 : cluster [DBG] pgmap v1835: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:44.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:44 vm00 bash[20770]: cluster 2026-03-09T18:02:43.148675+0000 mgr.y (mgr.14505) 1357 : cluster [DBG] pgmap v1835: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:44.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:44 vm00 bash[20770]: audit 2026-03-09T18:02:43.217882+0000 mgr.y (mgr.14505) 1358 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:44.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:44 vm00 bash[20770]: audit 
2026-03-09T18:02:43.217882+0000 mgr.y (mgr.14505) 1358 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:44.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:44 vm00 bash[20770]: audit 2026-03-09T18:02:44.032369+0000 mon.c (mon.2) 968 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:44.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:44 vm00 bash[20770]: audit 2026-03-09T18:02:44.032369+0000 mon.c (mon.2) 968 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:44 vm02 bash[23351]: cluster 2026-03-09T18:02:43.148675+0000 mgr.y (mgr.14505) 1357 : cluster [DBG] pgmap v1835: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:44 vm02 bash[23351]: cluster 2026-03-09T18:02:43.148675+0000 mgr.y (mgr.14505) 1357 : cluster [DBG] pgmap v1835: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:02:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:44 vm02 bash[23351]: audit 2026-03-09T18:02:43.217882+0000 mgr.y (mgr.14505) 1358 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:44 vm02 bash[23351]: audit 2026-03-09T18:02:43.217882+0000 mgr.y (mgr.14505) 1358 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:44 vm02 bash[23351]: audit 2026-03-09T18:02:44.032369+0000 mon.c (mon.2) 968 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:44.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:44 vm02 bash[23351]: audit 2026-03-09T18:02:44.032369+0000 mon.c (mon.2) 968 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:46 vm00 bash[20770]: cluster 2026-03-09T18:02:45.148915+0000 mgr.y (mgr.14505) 1359 : cluster [DBG] pgmap v1836: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:46.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:46 vm00 bash[20770]: cluster 2026-03-09T18:02:45.148915+0000 mgr.y (mgr.14505) 1359 : cluster [DBG] pgmap v1836: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:46.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:02:46 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:02:46] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:02:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:46 vm00 bash[28333]: cluster 2026-03-09T18:02:45.148915+0000 mgr.y (mgr.14505) 1359 : cluster [DBG] pgmap v1836: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:46.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:46 vm00 bash[28333]: cluster 2026-03-09T18:02:45.148915+0000 mgr.y (mgr.14505) 1359 : cluster [DBG] pgmap v1836: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:46 vm02 bash[23351]: cluster 2026-03-09T18:02:45.148915+0000 mgr.y (mgr.14505) 1359 : cluster [DBG] pgmap v1836: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:46.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:46 vm02 bash[23351]: cluster 2026-03-09T18:02:45.148915+0000 mgr.y (mgr.14505) 1359 : cluster [DBG] pgmap v1836: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:48 vm00 bash[20770]: cluster 2026-03-09T18:02:47.149234+0000 mgr.y (mgr.14505) 1360 : cluster [DBG] pgmap v1837: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:48.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:48 vm00 bash[20770]: cluster 2026-03-09T18:02:47.149234+0000 mgr.y (mgr.14505) 1360 : cluster [DBG] pgmap v1837: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:48 vm00 bash[28333]: cluster 2026-03-09T18:02:47.149234+0000 mgr.y (mgr.14505) 1360 : cluster [DBG] pgmap v1837: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:48.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:48 vm00 bash[28333]: cluster 2026-03-09T18:02:47.149234+0000 mgr.y (mgr.14505) 1360 : cluster [DBG] pgmap v1837: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:48 vm02 bash[23351]: cluster 2026-03-09T18:02:47.149234+0000 mgr.y (mgr.14505) 1360 : cluster [DBG] pgmap v1837: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:48.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:48 vm02 bash[23351]: cluster 2026-03-09T18:02:47.149234+0000 mgr.y (mgr.14505) 1360 : cluster [DBG] pgmap v1837: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:50 vm00 bash[20770]: cluster 2026-03-09T18:02:49.149930+0000 mgr.y (mgr.14505) 1361 : cluster [DBG] pgmap v1838: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:50.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:50 vm00 bash[20770]: cluster 2026-03-09T18:02:49.149930+0000 mgr.y (mgr.14505) 1361 : cluster [DBG] pgmap v1838: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:50 vm00 bash[28333]: cluster 2026-03-09T18:02:49.149930+0000 mgr.y (mgr.14505) 1361 : cluster [DBG] pgmap v1838: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:50.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 18:02:50 vm00 bash[28333]: cluster 2026-03-09T18:02:49.149930+0000 mgr.y (mgr.14505) 1361 : cluster [DBG] pgmap v1838: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:50 vm02 bash[23351]: cluster 2026-03-09T18:02:49.149930+0000 mgr.y (mgr.14505) 1361 : cluster [DBG] pgmap v1838: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:50.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:50 vm02 bash[23351]: cluster 2026-03-09T18:02:49.149930+0000 mgr.y (mgr.14505) 1361 : cluster [DBG] pgmap v1838: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:52 vm00 bash[28333]: cluster 2026-03-09T18:02:51.150196+0000 mgr.y (mgr.14505) 1362 : cluster [DBG] pgmap v1839: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:52.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:52 vm00 bash[28333]: cluster 2026-03-09T18:02:51.150196+0000 mgr.y (mgr.14505) 1362 : cluster [DBG] pgmap v1839: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:52 vm00 bash[20770]: cluster 2026-03-09T18:02:51.150196+0000 mgr.y (mgr.14505) 1362 : cluster [DBG] pgmap v1839: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:52.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:52 vm00 bash[20770]: cluster 2026-03-09T18:02:51.150196+0000 mgr.y (mgr.14505) 1362 : cluster [DBG] pgmap v1839: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:52.635 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:52 vm02 bash[23351]: cluster 2026-03-09T18:02:51.150196+0000 mgr.y (mgr.14505) 1362 : cluster [DBG] pgmap v1839: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:52.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:52 vm02 bash[23351]: cluster 2026-03-09T18:02:51.150196+0000 mgr.y (mgr.14505) 1362 : cluster [DBG] pgmap v1839: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:53.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:02:53 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:54 vm00 bash[20770]: cluster 2026-03-09T18:02:53.150831+0000 mgr.y (mgr.14505) 1363 : cluster [DBG] pgmap v1840: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:54 vm00 bash[20770]: cluster 2026-03-09T18:02:53.150831+0000 mgr.y (mgr.14505) 1363 : cluster [DBG] pgmap v1840: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:54 vm00 bash[20770]: audit 2026-03-09T18:02:53.228513+0000 mgr.y (mgr.14505) 1364 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:54 vm00 bash[20770]: audit 2026-03-09T18:02:53.228513+0000 mgr.y (mgr.14505) 1364 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:54 vm00 bash[28333]: cluster 2026-03-09T18:02:53.150831+0000 mgr.y (mgr.14505) 1363 : cluster [DBG] pgmap v1840: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:54 vm00 bash[28333]: cluster 2026-03-09T18:02:53.150831+0000 mgr.y (mgr.14505) 1363 : cluster [DBG] pgmap v1840: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:54 vm00 bash[28333]: audit 2026-03-09T18:02:53.228513+0000 mgr.y (mgr.14505) 1364 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:54.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:54 vm00 bash[28333]: audit 2026-03-09T18:02:53.228513+0000 mgr.y (mgr.14505) 1364 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:54 vm02 bash[23351]: cluster 2026-03-09T18:02:53.150831+0000 mgr.y (mgr.14505) 1363 : cluster [DBG] pgmap v1840: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:54 vm02 bash[23351]: cluster 2026-03-09T18:02:53.150831+0000 mgr.y (mgr.14505) 1363 : cluster [DBG] pgmap v1840: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:02:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:54 vm02 bash[23351]: audit 2026-03-09T18:02:53.228513+0000 mgr.y (mgr.14505) 1364 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:54.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:54 vm02 bash[23351]: audit 2026-03-09T18:02:53.228513+0000 mgr.y (mgr.14505) 1364 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:02:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:56 vm00 bash[20770]: cluster 2026-03-09T18:02:55.151096+0000 mgr.y (mgr.14505) 1365 : cluster [DBG] pgmap v1841: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:56.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:56 vm00 bash[20770]: cluster 2026-03-09T18:02:55.151096+0000 mgr.y (mgr.14505) 1365 : cluster [DBG] pgmap v1841: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:56 vm00 bash[28333]: cluster 2026-03-09T18:02:55.151096+0000 mgr.y (mgr.14505) 1365 : cluster [DBG] pgmap v1841: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:56.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:56 vm00 
bash[28333]: cluster 2026-03-09T18:02:55.151096+0000 mgr.y (mgr.14505) 1365 : cluster [DBG] pgmap v1841: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:56.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:02:56 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:02:56] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:02:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:56 vm02 bash[23351]: cluster 2026-03-09T18:02:55.151096+0000 mgr.y (mgr.14505) 1365 : cluster [DBG] pgmap v1841: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:56.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:56 vm02 bash[23351]: cluster 2026-03-09T18:02:55.151096+0000 mgr.y (mgr.14505) 1365 : cluster [DBG] pgmap v1841: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:58 vm00 bash[20770]: cluster 2026-03-09T18:02:57.151342+0000 mgr.y (mgr.14505) 1366 : cluster [DBG] pgmap v1842: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:58.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:58 vm00 bash[20770]: cluster 2026-03-09T18:02:57.151342+0000 mgr.y (mgr.14505) 1366 : cluster [DBG] pgmap v1842: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:58 vm00 bash[28333]: cluster 2026-03-09T18:02:57.151342+0000 mgr.y (mgr.14505) 1366 : cluster [DBG] pgmap v1842: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:58.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:58 vm00 bash[28333]: cluster 2026-03-09T18:02:57.151342+0000 mgr.y (mgr.14505) 1366 : cluster [DBG] pgmap v1842: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:58 vm02 bash[23351]: cluster 2026-03-09T18:02:57.151342+0000 mgr.y (mgr.14505) 1366 : cluster [DBG] pgmap v1842: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:58.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:58 vm02 bash[23351]: cluster 2026-03-09T18:02:57.151342+0000 mgr.y (mgr.14505) 1366 : cluster [DBG] pgmap v1842: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:02:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:59 vm02 bash[23351]: audit 2026-03-09T18:02:59.038321+0000 mon.c (mon.2) 969 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:59.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:02:59 vm02 bash[23351]: audit 2026-03-09T18:02:59.038321+0000 mon.c (mon.2) 969 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:59 vm00 bash[28333]: audit 2026-03-09T18:02:59.038321+0000 mon.c (mon.2) 969 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:59.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:02:59 vm00 bash[28333]: audit 2026-03-09T18:02:59.038321+0000 mon.c (mon.2) 969 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:59 vm00 bash[20770]: audit 2026-03-09T18:02:59.038321+0000 mon.c (mon.2) 969 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:02:59.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:02:59 vm00 bash[20770]: audit 2026-03-09T18:02:59.038321+0000 mon.c (mon.2) 969 : audit [DBG] from='mgr.14505 192.168.123.100:0/1610805934' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:03:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:00 vm02 bash[23351]: cluster 2026-03-09T18:02:59.151988+0000 mgr.y (mgr.14505) 1367 : cluster [DBG] pgmap v1843: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:00.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:00 vm02 bash[23351]: cluster 2026-03-09T18:02:59.151988+0000 mgr.y (mgr.14505) 1367 : cluster [DBG] pgmap v1843: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:00 vm00 bash[20770]: cluster 2026-03-09T18:02:59.151988+0000 mgr.y (mgr.14505) 1367 : cluster [DBG] pgmap v1843: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:00.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:00 vm00 bash[20770]: cluster 2026-03-09T18:02:59.151988+0000 mgr.y (mgr.14505) 1367 : cluster [DBG] pgmap v1843: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:00 vm00 bash[28333]: cluster 2026-03-09T18:02:59.151988+0000 mgr.y (mgr.14505) 1367 : cluster [DBG] pgmap v1843: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:00.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:00 vm00 bash[28333]: cluster 2026-03-09T18:02:59.151988+0000 mgr.y (mgr.14505) 1367 : cluster [DBG] pgmap v1843: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:01.647 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool delete bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c --yes-i-really-really-mean-it 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- 192.168.123.100:0/3894333679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0618101ff0 msgr2=0x7f061810eec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- 192.168.123.100:0/3894333679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0618101ff0 0x7f061810eec0 secure :-1 s=READY pgs=2875 cs=0 l=1 rev1=1 crypto rx=0x7f0600009a30 tx=0x7f060001c920 comp rx=0 tx=0).stop 2026-03-09T18:03:01.714 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- 192.168.123.100:0/3894333679 shutdown_connections 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- 192.168.123.100:0/3894333679 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f061810f400 0x7f06181117f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- 192.168.123.100:0/3894333679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0618101ff0 0x7f061810eec0 unknown :-1 s=CLOSED pgs=2875 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- 192.168.123.100:0/3894333679 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f06181016d0 0x7f0618101ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- 192.168.123.100:0/3894333679 >> 192.168.123.100:0/3894333679 conn(0x7f06180fd540 msgr2=0x7f06180ff960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- 192.168.123.100:0/3894333679 shutdown_connections 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- 192.168.123.100:0/3894333679 wait complete. 2026-03-09T18:03:01.714 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 Processor -- start 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- start start 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f06181016d0 0x7f061819f190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 0x7f061819f6d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f061810f400 0x7f06181a3a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0618116c70 con 0x7f0618101ff0 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f0618116af0 con 0x7f06181016d0 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f061d447640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0618116df0 con 0x7f061810f400 2026-03-09T18:03:01.715 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 0x7f061819f6d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 0x7f061819f6d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:50970/0 (socket says 192.168.123.100:50970) 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 -- 192.168.123.100:0/452950533 learned_addr learned my addr 192.168.123.100:0/452950533 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06177fe640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f061810f400 0x7f06181a3a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 -- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f061810f400 msgr2=0x7f06181a3a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f061810f400 0x7f06181a3a60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 -- 192.168.123.100:0/452950533 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f06181016d0 msgr2=0x7f061819f190 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:03:01.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f0616ffd640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f06181016d0 0x7f061819f190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:01.716 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f06181016d0 0x7f061819f190 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:01.716 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f06181a41e0 con 0x7f0618101ff0 2026-03-09T18:03:01.716 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.709+0000 7f06167fc640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 0x7f061819f6d0 secure :-1 s=READY pgs=3154 cs=0 l=1 rev1=1 crypto rx=0x7f0600005bc0 tx=0x7f0600005a00 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:03:01.716 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0600004300 con 0x7f0618101ff0 2026-03-09T18:03:01.716 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f06000044a0 con 0x7f0618101ff0 2026-03-09T18:03:01.716 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f061d447640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f06181a4470 con 0x7f0618101ff0 2026-03-09T18:03:01.717 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f06000ae890 con 0x7f0618101ff0 2026-03-09T18:03:01.717 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f061d447640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f06181abd10 con 0x7f0618101ff0 2026-03-09T18:03:01.718 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f06000aeb10 con 0x7f0618101ff0 2026-03-09T18:03:01.718 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.713+0000 7f061d447640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f061806b830 con 0x7f0618101ff0 2026-03-09T18:03:01.721 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.717+0000 7f05f7fff640 1 --2- 192.168.123.100:0/452950533 >> v2:192.168.123.100:6800/2673235927 conn(0x7f05ec077690 0x7f05ec079b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:01.721 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.717+0000 7f0616ffd640 1 --2- 192.168.123.100:0/452950533 >> v2:192.168.123.100:6800/2673235927 conn(0x7f05ec077690 0x7f05ec079b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:01.721 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.717+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(824..824 src has 1..824) ==== 8124+0+0 (secure 0 0 0) 0x7f0600002800 con 0x7f0618101ff0 2026-03-09T18:03:01.721 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.717+0000 7f0616ffd640 1 --2- 192.168.123.100:0/452950533 >> v2:192.168.123.100:6800/2673235927 conn(0x7f05ec077690 0x7f05ec079b50 secure :-1 s=READY pgs=4289 cs=0 l=1 rev1=1 crypto rx=0x7f060c00ab20 tx=0x7f060c00a520 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:03:01.721 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.717+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=825}) -- 0x7f05ec083400 con 0x7f0618101ff0 
2026-03-09T18:03:01.721 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.717+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0600016650 con 0x7f0618101ff0 2026-03-09T18:03:01.811 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:01.805+0000 7f061d447640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true} v 0) -- 0x7f0618101ab0 con 0x7f0618101ff0 2026-03-09T18:03:02.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.309+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]=0 pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' removed v825) ==== 248+0+0 (secure 0 0 0) 0x7f0600101220 con 0x7f0618101ff0 2026-03-09T18:03:02.331 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.325+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(825..825 src has 1..825) ==== 296+0+0 (secure 0 0 0) 0x7f06000f9210 con 0x7f0618101ff0 2026-03-09T18:03:02.332 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.325+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=826}) -- 0x7f05ec084120 con 0x7f0618101ff0 2026-03-09T18:03:02.373 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.369+0000 7f061d447640 1 -- 192.168.123.100:0/452950533 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true} v 0) -- 0x7f06181a04f0 con 0x7f0618101ff0 2026-03-09T18:03:02.374 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.369+0000 7f05f7fff640 1 -- 192.168.123.100:0/452950533 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]=0 pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' does not exist v825) ==== 255+0+0 (secure 0 0 0) 0x7f06001060d0 con 0x7f0618101ff0 2026-03-09T18:03:02.374 INFO:tasks.workunit.client.0.vm00.stderr:pool 'bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c' does not exist 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 -- 192.168.123.100:0/452950533 >> v2:192.168.123.100:6800/2673235927 conn(0x7f05ec077690 msgr2=0x7f05ec079b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 --2- 192.168.123.100:0/452950533 >> v2:192.168.123.100:6800/2673235927 conn(0x7f05ec077690 0x7f05ec079b50 secure :-1 s=READY pgs=4289 cs=0 l=1 rev1=1 crypto rx=0x7f060c00ab20 tx=0x7f060c00a520 comp rx=0 tx=0).stop 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 -- 192.168.123.100:0/452950533 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 msgr2=0x7f061819f6d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 0x7f061819f6d0 secure :-1 s=READY pgs=3154 cs=0 l=1 rev1=1 crypto rx=0x7f0600005bc0 tx=0x7f0600005a00 comp rx=0 tx=0).stop 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 -- 192.168.123.100:0/452950533 shutdown_connections 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 --2- 192.168.123.100:0/452950533 >> v2:192.168.123.100:6800/2673235927 conn(0x7f05ec077690 0x7f05ec079b50 unknown :-1 s=CLOSED pgs=4289 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f061810f400 0x7f06181a3a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0618101ff0 0x7f061819f6d0 unknown :-1 s=CLOSED pgs=3154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 --2- 192.168.123.100:0/452950533 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f06181016d0 0x7f061819f190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 -- 192.168.123.100:0/452950533 >> 192.168.123.100:0/452950533 conn(0x7f06180fd540 msgr2=0x7f06180ff930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 -- 192.168.123.100:0/452950533 shutdown_connections 2026-03-09T18:03:02.376 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.373+0000 7f05f5ffb640 1 -- 192.168.123.100:0/452950533 wait complete. 
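The traced cleanup commands above remove the workunit's temporary pools with "ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it": the pool name has to be given twice and the flag supplied before the monitor will dispatch the delete, and the monitors normally also have to permit deletion (mon_allow_pool_delete). A minimal sketch of the same cleanup step follows; the existence check is an added assumption for illustration, not something the workunit cleanup itself does:

    # hypothetical helper: delete a test pool only if it is still listed,
    # so a second pass does not produce the benign "pool ... does not exist"
    pool="$1"
    if ceph osd pool ls | grep -qx -- "$pool"; then
        ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
    fi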
2026-03-09T18:03:02.388 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool delete bbf4d092-de96-4c27-8d15-50edd4b79fe3 bbf4d092-de96-4c27-8d15-50edd4b79fe3 --yes-i-really-really-mean-it 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/3385928929 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e54108f00 msgr2=0x7f4e5410f910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/3385928929 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e54108f00 0x7f4e5410f910 secure :-1 s=READY pgs=3228 cs=0 l=1 rev1=1 crypto rx=0x7f4e5000b3e0 tx=0x7f4e5001ccb0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/3385928929 shutdown_connections 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/3385928929 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e54108f00 0x7f4e5410f910 unknown :-1 s=CLOSED pgs=3228 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/3385928929 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e541087d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/3385928929 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e541015a0 0x7f4e54101980 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/3385928929 >> 192.168.123.100:0/3385928929 conn(0x7f4e54077ee0 msgr2=0x7f4e540782f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/3385928929 shutdown_connections 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/3385928929 wait complete. 
2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 Processor -- start 2026-03-09T18:03:02.449 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- start start 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e541015a0 0x7f4e5419cf50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e5419d490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e54108f00 0x7f4e541a1820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4e54114b00 con 0x7f4e54101f50 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 -- 0x7f4e54114980 con 0x7f4e54108f00 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f4e54114c80 con 0x7f4e541015a0 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e5419d490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e5419d490 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:51000/0 (socket says 192.168.123.100:51000) 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 -- 192.168.123.100:0/1654583673 learned_addr learned my addr 192.168.123.100:0/1654583673 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e598f0640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e54108f00 0x7f4e541a1820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 -- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e541015a0 msgr2=0x7f4e5419cf50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 
7f4e588ee640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e541015a0 0x7f4e5419cf50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 -- 192.168.123.100:0/1654583673 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e54108f00 msgr2=0x7f4e541a1820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e54108f00 0x7f4e541a1820 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4e541a1fa0 con 0x7f4e54101f50 2026-03-09T18:03:02.450 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e598f0640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e54108f00 0x7f4e541a1820 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T18:03:02.451 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e588ee640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e5419d490 secure :-1 s=READY pgs=3155 cs=0 l=1 rev1=1 crypto rx=0x7f4e4400d950 tx=0x7f4e4400de10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:03:02.451 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4e44014070 con 0x7f4e54101f50 2026-03-09T18:03:02.451 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4e440044e0 con 0x7f4e54101f50 2026-03-09T18:03:02.451 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4e44010460 con 0x7f4e54101f50 2026-03-09T18:03:02.452 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4e541a2290 con 0x7f4e54101f50 2026-03-09T18:03:02.452 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.445+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f4e54102490 con 0x7f4e54101f50 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.449+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4e1c005190 con 0x7f4e54101f50 2026-03-09T18:03:02.456 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.449+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4e4400b840 con 0x7f4e54101f50 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.449+0000 7f4e4a7fc640 1 --2- 192.168.123.100:0/1654583673 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4e300777a0 0x7f4e30079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.449+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(825..825 src has 1..825) ==== 7736+0+0 (secure 0 0 0) 0x7f4e44099df0 con 0x7f4e54101f50 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.449+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=826}) -- 0x7f4e30082cf0 con 0x7f4e54101f50 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.453+0000 7f4e590ef640 1 --2- 192.168.123.100:0/1654583673 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4e300777a0 0x7f4e30079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.453+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4e44066640 con 0x7f4e54101f50 2026-03-09T18:03:02.456 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.453+0000 7f4e590ef640 1 --2- 192.168.123.100:0/1654583673 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4e300777a0 0x7f4e30079c60 secure :-1 s=READY pgs=4290 cs=0 l=1 rev1=1 crypto rx=0x7f4e3c0059d0 tx=0x7f4e3c005960 comp rx=0 tx=0).ready entity=mgr.14505 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T18:03:02.551 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:02.545+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true} v 0) -- 0x7f4e1c005480 con 0x7f4e54101f50 2026-03-09T18:03:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:02 vm02 bash[23351]: cluster 2026-03-09T18:03:01.152296+0000 mgr.y (mgr.14505) 1368 : cluster [DBG] pgmap v1844: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:03:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:02 vm02 bash[23351]: cluster 2026-03-09T18:03:01.152296+0000 mgr.y (mgr.14505) 1368 : cluster [DBG] pgmap v1844: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:03:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:02 vm02 bash[23351]: audit 2026-03-09T18:03:01.811929+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 
192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:02.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:02 vm02 bash[23351]: audit 2026-03-09T18:03:01.811929+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:02 vm00 bash[20770]: cluster 2026-03-09T18:03:01.152296+0000 mgr.y (mgr.14505) 1368 : cluster [DBG] pgmap v1844: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:02 vm00 bash[20770]: cluster 2026-03-09T18:03:01.152296+0000 mgr.y (mgr.14505) 1368 : cluster [DBG] pgmap v1844: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:02 vm00 bash[20770]: audit 2026-03-09T18:03:01.811929+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:02 vm00 bash[20770]: audit 2026-03-09T18:03:01.811929+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:02 vm00 bash[28333]: cluster 2026-03-09T18:03:01.152296+0000 mgr.y (mgr.14505) 1368 : cluster [DBG] pgmap v1844: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:02 vm00 bash[28333]: cluster 2026-03-09T18:03:01.152296+0000 mgr.y (mgr.14505) 1368 : cluster [DBG] pgmap v1844: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:02 vm00 bash[28333]: audit 2026-03-09T18:03:01.811929+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:02.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:02 vm00 bash[28333]: audit 2026-03-09T18:03:01.811929+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 
192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.323 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.317+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]=0 pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' removed v826) ==== 248+0+0 (secure 0 0 0) 0x7f4e4406b4f0 con 0x7f4e54101f50 2026-03-09T18:03:03.340 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.333+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(826..826 src has 1..826) ==== 296+0+0 (secure 0 0 0) 0x7f4e4409d030 con 0x7f4e54101f50 2026-03-09T18:03:03.340 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.333+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=827}) -- 0x7f4e30083a50 con 0x7f4e54101f50 2026-03-09T18:03:03.388 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.381+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true} v 0) -- 0x7f4e1c0034a0 con 0x7f4e54101f50 2026-03-09T18:03:03.388 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e4a7fc640 1 -- 192.168.123.100:0/1654583673 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]=0 pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' does not exist v826) ==== 255+0+0 (secure 0 0 0) 0x7f4e4405e630 con 0x7f4e54101f50 2026-03-09T18:03:03.389 INFO:tasks.workunit.client.0.vm00.stderr:pool 'bbf4d092-de96-4c27-8d15-50edd4b79fe3' does not exist 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4e300777a0 msgr2=0x7f4e30079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/1654583673 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4e300777a0 0x7f4e30079c60 secure :-1 s=READY pgs=4290 cs=0 l=1 rev1=1 crypto rx=0x7f4e3c0059d0 tx=0x7f4e3c005960 comp rx=0 tx=0).stop 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 msgr2=0x7f4e5419d490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e5419d490 secure :-1 s=READY pgs=3155 cs=0 l=1 rev1=1 crypto rx=0x7f4e4400d950 tx=0x7f4e4400de10 comp rx=0 tx=0).stop 
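Each delete above is acknowledged twice: the first mon_command_ack reports "=0 pool '...' removed" and the osdmap epoch advances (824 to 825 for the first pool, 825 to 826 for the second), while the immediate retry reports "=0 pool '...' does not exist", still with a zero return code, so the repeated delete does not fail the script. A follow-on check one could add, sketched here assuming only that the ceph CLI and python3 are available on the test node (it is not part of the workunit), is to require the osdmap epoch to advance across a delete:

    # hypothetical post-delete check: read the osdmap epoch before and after
    # the delete and require that it advanced
    before=$(ceph osd dump --format json | python3 -c 'import sys, json; print(json.load(sys.stdin)["epoch"])')
    ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
    after=$(ceph osd dump --format json | python3 -c 'import sys, json; print(json.load(sys.stdin)["epoch"])')
    [ "$after" -gt "$before" ]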
2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 shutdown_connections 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/1654583673 >> v2:192.168.123.100:6800/2673235927 conn(0x7f4e300777a0 0x7f4e30079c60 unknown :-1 s=CLOSED pgs=4290 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4e54108f00 0x7f4e541a1820 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4e54101f50 0x7f4e5419d490 unknown :-1 s=CLOSED pgs=3155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 --2- 192.168.123.100:0/1654583673 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4e541015a0 0x7f4e5419cf50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 >> 192.168.123.100:0/1654583673 conn(0x7f4e54077ee0 msgr2=0x7f4e540ffd80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 shutdown_connections 2026-03-09T18:03:03.390 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-09T18:03:03.385+0000 7f4e5b37a640 1 -- 192.168.123.100:0/1654583673 wait complete. 2026-03-09T18:03:03.401 INFO:tasks.workunit.client.0.vm00.stdout:OK 2026-03-09T18:03:03.401 INFO:tasks.workunit.client.0.vm00.stderr:+ echo OK 2026-03-09T18:03:03.401 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T18:03:03.401 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T18:03:03.410 INFO:tasks.workunit:Stopping ['rados/test.sh', 'rados/test_pool_quota.sh'] on client.0... 2026-03-09T18:03:03.410 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: audit 2026-03-09T18:03:02.313696+0000 mon.a (mon.0) 3734 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: audit 2026-03-09T18:03:02.313696+0000 mon.a (mon.0) 3734 : audit [INF] from='client.? 
192.168.123.100:0/452950533' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: cluster 2026-03-09T18:03:02.319381+0000 mon.a (mon.0) 3735 : cluster [DBG] osdmap e825: 8 total, 8 up, 8 in 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: cluster 2026-03-09T18:03:02.319381+0000 mon.a (mon.0) 3735 : cluster [DBG] osdmap e825: 8 total, 8 up, 8 in 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: audit 2026-03-09T18:03:02.374112+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: audit 2026-03-09T18:03:02.374112+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: audit 2026-03-09T18:03:02.552206+0000 mon.a (mon.0) 3737 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:03 vm02 bash[23351]: audit 2026-03-09T18:03:02.552206+0000 mon.a (mon.0) 3737 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:03:03 vm02 bash[48996]: debug there is no tcmu-runner data available 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: audit 2026-03-09T18:03:02.313696+0000 mon.a (mon.0) 3734 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: audit 2026-03-09T18:03:02.313696+0000 mon.a (mon.0) 3734 : audit [INF] from='client.? 
192.168.123.100:0/452950533' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: cluster 2026-03-09T18:03:02.319381+0000 mon.a (mon.0) 3735 : cluster [DBG] osdmap e825: 8 total, 8 up, 8 in 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: cluster 2026-03-09T18:03:02.319381+0000 mon.a (mon.0) 3735 : cluster [DBG] osdmap e825: 8 total, 8 up, 8 in 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: audit 2026-03-09T18:03:02.374112+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: audit 2026-03-09T18:03:02.374112+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: audit 2026-03-09T18:03:02.552206+0000 mon.a (mon.0) 3737 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:03 vm00 bash[28333]: audit 2026-03-09T18:03:02.552206+0000 mon.a (mon.0) 3737 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: audit 2026-03-09T18:03:02.313696+0000 mon.a (mon.0) 3734 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: audit 2026-03-09T18:03:02.313696+0000 mon.a (mon.0) 3734 : audit [INF] from='client.? 
192.168.123.100:0/452950533' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: cluster 2026-03-09T18:03:02.319381+0000 mon.a (mon.0) 3735 : cluster [DBG] osdmap e825: 8 total, 8 up, 8 in 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: cluster 2026-03-09T18:03:02.319381+0000 mon.a (mon.0) 3735 : cluster [DBG] osdmap e825: 8 total, 8 up, 8 in 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: audit 2026-03-09T18:03:02.374112+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: audit 2026-03-09T18:03:02.374112+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 192.168.123.100:0/452950533' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "pool2": "bf8b5401-2db9-4b6a-ba7e-fbbf27caed8c", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: audit 2026-03-09T18:03:02.552206+0000 mon.a (mon.0) 3737 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:03 vm00 bash[20770]: audit 2026-03-09T18:03:02.552206+0000 mon.a (mon.0) 3737 : audit [INF] from='client.? 
192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:03.848 DEBUG:teuthology.parallel:result is None 2026-03-09T18:03:03.848 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T18:03:03.857 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T18:03:03.857 DEBUG:teuthology.orchestra.run.vm00:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T18:03:03.904 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T18:03:03.904 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T18:03:03.906 INFO:tasks.cephadm:Teardown begin 2026-03-09T18:03:03.906 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:03:03.957 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:03:03.968 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T18:03:03.969 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e -- ceph mgr module disable cephadm 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: cluster 2026-03-09T18:03:03.152766+0000 mgr.y (mgr.14505) 1369 : cluster [DBG] pgmap v1846: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: cluster 2026-03-09T18:03:03.152766+0000 mgr.y (mgr.14505) 1369 : cluster [DBG] pgmap v1846: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: audit 2026-03-09T18:03:03.230484+0000 mgr.y (mgr.14505) 1370 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: audit 2026-03-09T18:03:03.230484+0000 mgr.y (mgr.14505) 1370 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: cluster 2026-03-09T18:03:03.315054+0000 mon.a (mon.0) 3738 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: cluster 2026-03-09T18:03:03.315054+0000 mon.a (mon.0) 3738 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: audit 2026-03-09T18:03:03.322905+0000 mon.a (mon.0) 3739 : audit [INF] from='client.? 
192.168.123.100:0/1654583673' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: audit 2026-03-09T18:03:03.322905+0000 mon.a (mon.0) 3739 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: cluster 2026-03-09T18:03:03.365603+0000 mon.a (mon.0) 3740 : cluster [DBG] osdmap e826: 8 total, 8 up, 8 in 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: cluster 2026-03-09T18:03:03.365603+0000 mon.a (mon.0) 3740 : cluster [DBG] osdmap e826: 8 total, 8 up, 8 in 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: audit 2026-03-09T18:03:03.388538+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:04.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:04 vm02 bash[23351]: audit 2026-03-09T18:03:03.388538+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: cluster 2026-03-09T18:03:03.152766+0000 mgr.y (mgr.14505) 1369 : cluster [DBG] pgmap v1846: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: cluster 2026-03-09T18:03:03.152766+0000 mgr.y (mgr.14505) 1369 : cluster [DBG] pgmap v1846: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: audit 2026-03-09T18:03:03.230484+0000 mgr.y (mgr.14505) 1370 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: audit 2026-03-09T18:03:03.230484+0000 mgr.y (mgr.14505) 1370 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: cluster 2026-03-09T18:03:03.315054+0000 mon.a (mon.0) 3738 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: cluster 2026-03-09T18:03:03.315054+0000 mon.a (mon.0) 3738 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:03:04.788 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: audit 2026-03-09T18:03:03.322905+0000 mon.a (mon.0) 3739 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: audit 2026-03-09T18:03:03.322905+0000 mon.a (mon.0) 3739 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: cluster 2026-03-09T18:03:03.365603+0000 mon.a (mon.0) 3740 : cluster [DBG] osdmap e826: 8 total, 8 up, 8 in 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: cluster 2026-03-09T18:03:03.365603+0000 mon.a (mon.0) 3740 : cluster [DBG] osdmap e826: 8 total, 8 up, 8 in 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: audit 2026-03-09T18:03:03.388538+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:04.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:04 vm00 bash[28333]: audit 2026-03-09T18:03:03.388538+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? 
192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: cluster 2026-03-09T18:03:03.152766+0000 mgr.y (mgr.14505) 1369 : cluster [DBG] pgmap v1846: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: cluster 2026-03-09T18:03:03.152766+0000 mgr.y (mgr.14505) 1369 : cluster [DBG] pgmap v1846: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: audit 2026-03-09T18:03:03.230484+0000 mgr.y (mgr.14505) 1370 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: audit 2026-03-09T18:03:03.230484+0000 mgr.y (mgr.14505) 1370 : audit [DBG] from='client.14484 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: cluster 2026-03-09T18:03:03.315054+0000 mon.a (mon.0) 3738 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: cluster 2026-03-09T18:03:03.315054+0000 mon.a (mon.0) 3738 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: audit 2026-03-09T18:03:03.322905+0000 mon.a (mon.0) 3739 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: audit 2026-03-09T18:03:03.322905+0000 mon.a (mon.0) 3739 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: cluster 2026-03-09T18:03:03.365603+0000 mon.a (mon.0) 3740 : cluster [DBG] osdmap e826: 8 total, 8 up, 8 in 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: cluster 2026-03-09T18:03:03.365603+0000 mon.a (mon.0) 3740 : cluster [DBG] osdmap e826: 8 total, 8 up, 8 in 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: audit 2026-03-09T18:03:03.388538+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? 
192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:04.789 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:04 vm00 bash[20770]: audit 2026-03-09T18:03:03.388538+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? 192.168.123.100:0/1654583673' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "pool2": "bbf4d092-de96-4c27-8d15-50edd4b79fe3", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T18:03:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:06 vm02 bash[23351]: cluster 2026-03-09T18:03:05.153029+0000 mgr.y (mgr.14505) 1371 : cluster [DBG] pgmap v1848: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:06.636 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:06 vm02 bash[23351]: cluster 2026-03-09T18:03:05.153029+0000 mgr.y (mgr.14505) 1371 : cluster [DBG] pgmap v1848: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:06 vm00 bash[20770]: cluster 2026-03-09T18:03:05.153029+0000 mgr.y (mgr.14505) 1371 : cluster [DBG] pgmap v1848: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:06.788 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:06 vm00 bash[20770]: cluster 2026-03-09T18:03:05.153029+0000 mgr.y (mgr.14505) 1371 : cluster [DBG] pgmap v1848: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:06.788 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:06 vm00 bash[21037]: ::ffff:192.168.123.102 - - [09/Mar/2026:18:03:06] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:03:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:06 vm00 bash[28333]: cluster 2026-03-09T18:03:05.153029+0000 mgr.y (mgr.14505) 1371 : cluster [DBG] pgmap v1848: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:06.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:06 vm00 bash[28333]: cluster 2026-03-09T18:03:05.153029+0000 mgr.y (mgr.14505) 1371 : cluster [DBG] pgmap v1848: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:03:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:08 vm00 bash[28333]: cluster 2026-03-09T18:03:07.153338+0000 mgr.y (mgr.14505) 1372 : cluster [DBG] pgmap v1849: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:03:08.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:08 vm00 bash[28333]: cluster 2026-03-09T18:03:07.153338+0000 mgr.y (mgr.14505) 1372 : cluster [DBG] pgmap v1849: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:03:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:08 vm00 bash[20770]: cluster 2026-03-09T18:03:07.153338+0000 mgr.y (mgr.14505) 1372 : cluster [DBG] pgmap v1849: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:03:08.288 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:08 vm00 bash[20770]: 
cluster 2026-03-09T18:03:07.153338+0000 mgr.y (mgr.14505) 1372 : cluster [DBG] pgmap v1849: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:03:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:08 vm02 bash[23351]: cluster 2026-03-09T18:03:07.153338+0000 mgr.y (mgr.14505) 1372 : cluster [DBG] pgmap v1849: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:03:08.386 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 09 18:03:08 vm02 bash[23351]: cluster 2026-03-09T18:03:07.153338+0000 mgr.y (mgr.14505) 1372 : cluster [DBG] pgmap v1849: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:03:08.635 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/mon.c/config 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T18:03:08.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-09T18:03:08.797+0000 7fa185b53640 -1 monclient: keyring not found 2026-03-09T18:03:08.804 INFO:teuthology.orchestra.run.vm00.stderr:[errno 21] error connecting to the cluster 2026-03-09T18:03:08.848 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:03:08.848 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T18:03:08.848 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T18:03:08.851 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T18:03:08.854 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T18:03:08.854 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T18:03:08.854 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a 2026-03-09T18:03:08.944 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:03:08 vm00 systemd[1]: Stopping Ceph mon.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
2026-03-09T18:03:09.049 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.a.service' 2026-03-09T18:03:09.061 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:09.061 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T18:03:09.061 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T18:03:09.061 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.c 2026-03-09T18:03:09.288 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:09 vm00 bash[21037]: [09/Mar/2026:18:03:09] ENGINE Bus STOPPING 2026-03-09T18:03:09.288 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:09 vm00 systemd[1]: Stopping Ceph mon.c for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:09 vm00 bash[28333]: debug 2026-03-09T18:03:09.153+0000 7f109141e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:03:09.289 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:09 vm00 bash[28333]: debug 2026-03-09T18:03:09.153+0000 7f109141e640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T18:03:09.362 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:03:09 vm00 bash[132560]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-mon-c 2026-03-09T18:03:09.362 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:09 vm00 bash[21037]: [09/Mar/2026:18:03:09] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:03:09.362 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:09 vm00 bash[21037]: [09/Mar/2026:18:03:09] ENGINE Bus STOPPED 2026-03-09T18:03:09.362 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:09 vm00 bash[21037]: [09/Mar/2026:18:03:09] ENGINE Bus STARTING 2026-03-09T18:03:09.389 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.c.service' 2026-03-09T18:03:09.401 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:09.401 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-09T18:03:09.401 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-09T18:03:09.401 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.b 2026-03-09T18:03:09.632 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mon.b.service' 2026-03-09T18:03:09.644 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:09.644 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T18:03:09.644 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 
2026-03-09T18:03:09.644 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y 2026-03-09T18:03:09.652 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:09 vm00 bash[21037]: [09/Mar/2026:18:03:09] ENGINE Serving on http://:::9283 2026-03-09T18:03:09.652 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:03:09 vm00 bash[21037]: [09/Mar/2026:18:03:09] ENGINE Bus STARTED 2026-03-09T18:03:09.812 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.y.service' 2026-03-09T18:03:09.868 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:09.868 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-09T18:03:09.868 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-09T18:03:09.868 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.x 2026-03-09T18:03:09.992 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 18:03:09 vm02 systemd[1]: Stopping Ceph mgr.x for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:09.993 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 09 18:03:09 vm02 bash[57240]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-mgr-x 2026-03-09T18:03:10.018 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@mgr.x.service' 2026-03-09T18:03:10.030 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:10.030 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-09T18:03:10.030 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T18:03:10.030 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.0 2026-03-09T18:03:10.288 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:03:10 vm00 systemd[1]: Stopping Ceph osd.0 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:10.288 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:03:10 vm00 bash[31220]: debug 2026-03-09T18:03:10.073+0000 7f47998ca640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:10.288 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:03:10 vm00 bash[31220]: debug 2026-03-09T18:03:10.073+0000 7f47998ca640 -1 osd.0 826 *** Got signal Terminated *** 2026-03-09T18:03:10.288 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:03:10 vm00 bash[31220]: debug 2026-03-09T18:03:10.073+0000 7f47998ca640 -1 osd.0 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:15.430 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:03:15 vm00 bash[132744]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-0 2026-03-09T18:03:15.473 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.0.service' 2026-03-09T18:03:15.486 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:15.486 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T18:03:15.486 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T18:03:15.486 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.1 2026-03-09T18:03:15.788 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:03:15 vm00 systemd[1]: Stopping Ceph osd.1 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
2026-03-09T18:03:15.788 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:03:15 vm00 bash[37416]: debug 2026-03-09T18:03:15.573+0000 7f7398f10640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:15.788 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:03:15 vm00 bash[37416]: debug 2026-03-09T18:03:15.573+0000 7f7398f10640 -1 osd.1 826 *** Got signal Terminated *** 2026-03-09T18:03:15.788 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:03:15 vm00 bash[37416]: debug 2026-03-09T18:03:15.573+0000 7f7398f10640 -1 osd.1 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:20.946 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:03:20 vm00 bash[132921]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-1 2026-03-09T18:03:20.974 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.1.service' 2026-03-09T18:03:20.985 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:20.985 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T18:03:20.985 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T18:03:20.985 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.2 2026-03-09T18:03:21.288 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:03:21 vm00 systemd[1]: Stopping Ceph osd.2 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:21.288 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:03:21 vm00 bash[43223]: debug 2026-03-09T18:03:21.069+0000 7fcae6508640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:21.288 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:03:21 vm00 bash[43223]: debug 2026-03-09T18:03:21.069+0000 7fcae6508640 -1 osd.2 826 *** Got signal Terminated *** 2026-03-09T18:03:21.288 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:03:21 vm00 bash[43223]: debug 2026-03-09T18:03:21.069+0000 7fcae6508640 -1 osd.2 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:25.886 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 18:03:25 vm02 bash[51866]: ts=2026-03-09T18:03:25.550Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:03:25.886 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 18:03:25 vm02 bash[51866]: ts=2026-03-09T18:03:25.550Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:03:25.886 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 18:03:25 vm02 bash[51866]: ts=2026-03-09T18:03:25.550Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 
192.168.123.100:8765: connect: connection refused" 2026-03-09T18:03:25.886 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 18:03:25 vm02 bash[51866]: ts=2026-03-09T18:03:25.550Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:03:25.886 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 18:03:25 vm02 bash[51866]: ts=2026-03-09T18:03:25.550Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:03:25.886 INFO:journalctl@ceph.prometheus.a.vm02.stdout:Mar 09 18:03:25 vm02 bash[51866]: ts=2026-03-09T18:03:25.550Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:03:26.433 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:03:26 vm00 bash[133107]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-2 2026-03-09T18:03:26.569 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.2.service' 2026-03-09T18:03:26.581 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:26.581 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T18:03:26.581 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T18:03:26.581 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.3 2026-03-09T18:03:26.788 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:03:26 vm00 systemd[1]: Stopping Ceph osd.3 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:26.788 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:03:26 vm00 bash[49274]: debug 2026-03-09T18:03:26.673+0000 7fea72f0f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:26.788 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:03:26 vm00 bash[49274]: debug 2026-03-09T18:03:26.673+0000 7fea72f0f640 -1 osd.3 826 *** Got signal Terminated *** 2026-03-09T18:03:26.788 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:03:26 vm00 bash[49274]: debug 2026-03-09T18:03:26.673+0000 7fea72f0f640 -1 osd.3 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:32.038 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:03:31 vm00 bash[133290]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-3 2026-03-09T18:03:32.093 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.3.service' 2026-03-09T18:03:32.104 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:32.105 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T18:03:32.105 INFO:tasks.cephadm.osd.4:Stopping osd.4... 
2026-03-09T18:03:32.105 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.4 2026-03-09T18:03:32.386 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:32 vm02 systemd[1]: Stopping Ceph osd.4 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:32.386 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:32 vm02 bash[26623]: debug 2026-03-09T18:03:32.142+0000 7fd79405f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:32.386 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:32 vm02 bash[26623]: debug 2026-03-09T18:03:32.142+0000 7fd79405f640 -1 osd.4 826 *** Got signal Terminated *** 2026-03-09T18:03:32.386 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:32 vm02 bash[26623]: debug 2026-03-09T18:03:32.142+0000 7fd79405f640 -1 osd.4 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:36.386 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:36 vm02 bash[26623]: debug 2026-03-09T18:03:35.998+0000 7fd78fe77640 -1 osd.4 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.036452+0000 front 2026-03-09T18:03:11.036436+0000 (oldest deadline 2026-03-09T18:03:35.736113+0000) 2026-03-09T18:03:37.136 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:36 vm02 bash[26623]: debug 2026-03-09T18:03:36.974+0000 7fd78fe77640 -1 osd.4 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.036452+0000 front 2026-03-09T18:03:11.036436+0000 (oldest deadline 2026-03-09T18:03:35.736113+0000) 2026-03-09T18:03:37.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:36 vm02 bash[44798]: debug 2026-03-09T18:03:36.842+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:37.496 INFO:journalctl@ceph.osd.4.vm02.stdout:Mar 09 18:03:37 vm02 bash[57331]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-4 2026-03-09T18:03:37.524 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.4.service' 2026-03-09T18:03:37.535 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:37.535 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T18:03:37.535 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T18:03:37.535 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.5 2026-03-09T18:03:37.867 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:37 vm02 bash[38575]: debug 2026-03-09T18:03:37.634+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:37.868 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:37 vm02 systemd[1]: Stopping Ceph osd.5 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
2026-03-09T18:03:37.868 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:37 vm02 bash[32792]: debug 2026-03-09T18:03:37.618+0000 7f84f5f6c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:37.868 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:37 vm02 bash[32792]: debug 2026-03-09T18:03:37.618+0000 7f84f5f6c640 -1 osd.5 826 *** Got signal Terminated *** 2026-03-09T18:03:37.868 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:37 vm02 bash[32792]: debug 2026-03-09T18:03:37.618+0000 7f84f5f6c640 -1 osd.5 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:38.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:37 vm02 bash[44798]: debug 2026-03-09T18:03:37.866+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:38.886 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:38 vm02 bash[38575]: debug 2026-03-09T18:03:38.622+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:38.886 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:38 vm02 bash[44798]: debug 2026-03-09T18:03:38.838+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:40.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:39 vm02 bash[38575]: debug 2026-03-09T18:03:39.634+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:40.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:39 vm02 bash[44798]: debug 2026-03-09T18:03:39.814+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:41.033 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:40 vm02 bash[38575]: debug 2026-03-09T18:03:40.674+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:41.033 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:40 vm02 bash[44798]: debug 2026-03-09T18:03:40.814+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:41.386 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:41 vm02 bash[32792]: debug 2026-03-09T18:03:41.030+0000 7f84f2585640 -1 osd.5 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:14.299728+0000 front 2026-03-09T18:03:14.299700+0000 (oldest deadline 2026-03-09T18:03:40.199457+0000) 
2026-03-09T18:03:42.038 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:41 vm02 bash[38575]: debug 2026-03-09T18:03:41.690+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:42.038 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:41 vm02 bash[44798]: debug 2026-03-09T18:03:41.806+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:42.386 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:42 vm02 bash[32792]: debug 2026-03-09T18:03:42.034+0000 7f84f2585640 -1 osd.5 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:14.299728+0000 front 2026-03-09T18:03:14.299700+0000 (oldest deadline 2026-03-09T18:03:40.199457+0000) 2026-03-09T18:03:42.386 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:42 vm02 bash[32792]: debug 2026-03-09T18:03:42.034+0000 7f84f2585640 -1 osd.5 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.199847+0000 front 2026-03-09T18:03:20.199889+0000 (oldest deadline 2026-03-09T18:03:41.899663+0000) 2026-03-09T18:03:42.985 INFO:journalctl@ceph.osd.5.vm02.stdout:Mar 09 18:03:42 vm02 bash[57518]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-5 2026-03-09T18:03:42.986 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:42 vm02 bash[38575]: debug 2026-03-09T18:03:42.666+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:42.986 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:42 vm02 bash[44798]: debug 2026-03-09T18:03:42.826+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:42.986 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:42 vm02 bash[44798]: debug 2026-03-09T18:03:42.826+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:43.030 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.5.service' 2026-03-09T18:03:43.042 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:43.042 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T18:03:43.042 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-09T18:03:43.042 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.6 2026-03-09T18:03:43.386 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:43 vm02 systemd[1]: Stopping Ceph osd.6 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
2026-03-09T18:03:43.386 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:43 vm02 bash[38575]: debug 2026-03-09T18:03:43.126+0000 7f70133b7640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:43.386 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:43 vm02 bash[38575]: debug 2026-03-09T18:03:43.126+0000 7f70133b7640 -1 osd.6 826 *** Got signal Terminated *** 2026-03-09T18:03:43.386 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:43 vm02 bash[38575]: debug 2026-03-09T18:03:43.126+0000 7f70133b7640 -1 osd.6 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:44.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:43 vm02 bash[38575]: debug 2026-03-09T18:03:43.646+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:44.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:43 vm02 bash[44798]: debug 2026-03-09T18:03:43.818+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:44.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:43 vm02 bash[44798]: debug 2026-03-09T18:03:43.818+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:44.886 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:44 vm02 bash[38575]: debug 2026-03-09T18:03:44.618+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:44.886 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:44 vm02 bash[38575]: debug 2026-03-09T18:03:44.618+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.179109+0000 front 2026-03-09T18:03:20.179131+0000 (oldest deadline 2026-03-09T18:03:44.278927+0000) 2026-03-09T18:03:44.886 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:44 vm02 bash[44798]: debug 2026-03-09T18:03:44.830+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:44.886 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:44 vm02 bash[44798]: debug 2026-03-09T18:03:44.830+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:45.886 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:45 vm02 bash[38575]: debug 2026-03-09T18:03:45.618+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 
2026-03-09T18:03:45.886 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:45 vm02 bash[38575]: debug 2026-03-09T18:03:45.618+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.179109+0000 front 2026-03-09T18:03:20.179131+0000 (oldest deadline 2026-03-09T18:03:44.278927+0000) 2026-03-09T18:03:45.886 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:45 vm02 bash[44798]: debug 2026-03-09T18:03:45.854+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:45.886 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:45 vm02 bash[44798]: debug 2026-03-09T18:03:45.854+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:47.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:46 vm02 bash[38575]: debug 2026-03-09T18:03:46.662+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:47.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:46 vm02 bash[38575]: debug 2026-03-09T18:03:46.662+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.179109+0000 front 2026-03-09T18:03:20.179131+0000 (oldest deadline 2026-03-09T18:03:44.278927+0000) 2026-03-09T18:03:47.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:46 vm02 bash[38575]: debug 2026-03-09T18:03:46.662+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:24.279396+0000 front 2026-03-09T18:03:24.279471+0000 (oldest deadline 2026-03-09T18:03:46.579144+0000) 2026-03-09T18:03:47.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:46 vm02 bash[44798]: debug 2026-03-09T18:03:46.830+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:47.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:46 vm02 bash[44798]: debug 2026-03-09T18:03:46.830+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:48.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:47 vm02 bash[38575]: debug 2026-03-09T18:03:47.678+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:11.478253+0000 front 2026-03-09T18:03:11.478450+0000 (oldest deadline 2026-03-09T18:03:37.378151+0000) 2026-03-09T18:03:48.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:47 vm02 bash[38575]: debug 2026-03-09T18:03:47.678+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.179109+0000 front 2026-03-09T18:03:20.179131+0000 (oldest deadline 2026-03-09T18:03:44.278927+0000) 2026-03-09T18:03:48.136 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:47 vm02 
bash[38575]: debug 2026-03-09T18:03:47.678+0000 7f700f9d0640 -1 osd.6 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:24.279396+0000 front 2026-03-09T18:03:24.279471+0000 (oldest deadline 2026-03-09T18:03:46.579144+0000) 2026-03-09T18:03:48.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:47 vm02 bash[44798]: debug 2026-03-09T18:03:47.874+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:48.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:47 vm02 bash[44798]: debug 2026-03-09T18:03:47.874+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:48.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:47 vm02 bash[44798]: debug 2026-03-09T18:03:47.874+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:25.504650+0000 front 2026-03-09T18:03:25.504497+0000 (oldest deadline 2026-03-09T18:03:47.204063+0000) 2026-03-09T18:03:48.487 INFO:journalctl@ceph.osd.6.vm02.stdout:Mar 09 18:03:48 vm02 bash[57703]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-6 2026-03-09T18:03:48.527 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.6.service' 2026-03-09T18:03:48.537 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:48.537 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T18:03:48.537 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-09T18:03:48.537 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.7 2026-03-09T18:03:48.856 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 systemd[1]: Stopping Ceph osd.7 for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
2026-03-09T18:03:48.856 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 bash[44798]: debug 2026-03-09T18:03:48.618+0000 7f74ca25a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:48.856 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 bash[44798]: debug 2026-03-09T18:03:48.618+0000 7f74ca25a640 -1 osd.7 826 *** Got signal Terminated *** 2026-03-09T18:03:48.856 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 bash[44798]: debug 2026-03-09T18:03:48.618+0000 7f74ca25a640 -1 osd.7 826 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:03:49.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 bash[44798]: debug 2026-03-09T18:03:48.854+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:49.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 bash[44798]: debug 2026-03-09T18:03:48.854+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:49.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:48 vm02 bash[44798]: debug 2026-03-09T18:03:48.854+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:25.504650+0000 front 2026-03-09T18:03:25.504497+0000 (oldest deadline 2026-03-09T18:03:47.204063+0000) 2026-03-09T18:03:50.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:49 vm02 bash[44798]: debug 2026-03-09T18:03:49.814+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:50.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:49 vm02 bash[44798]: debug 2026-03-09T18:03:49.814+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:50.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:49 vm02 bash[44798]: debug 2026-03-09T18:03:49.814+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:25.504650+0000 front 2026-03-09T18:03:25.504497+0000 (oldest deadline 2026-03-09T18:03:47.204063+0000) 2026-03-09T18:03:51.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:50 vm02 bash[44798]: debug 2026-03-09T18:03:50.834+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:51.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:50 vm02 bash[44798]: debug 2026-03-09T18:03:50.834+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 
2026-03-09T18:03:51.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:50 vm02 bash[44798]: debug 2026-03-09T18:03:50.834+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:25.504650+0000 front 2026-03-09T18:03:25.504497+0000 (oldest deadline 2026-03-09T18:03:47.204063+0000) 2026-03-09T18:03:52.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:51 vm02 bash[44798]: debug 2026-03-09T18:03:51.834+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:52.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:51 vm02 bash[44798]: debug 2026-03-09T18:03:51.834+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:52.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:51 vm02 bash[44798]: debug 2026-03-09T18:03:51.834+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:25.504650+0000 front 2026-03-09T18:03:25.504497+0000 (oldest deadline 2026-03-09T18:03:47.204063+0000) 2026-03-09T18:03:53.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:52 vm02 bash[44798]: debug 2026-03-09T18:03:52.806+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-09T18:03:13.903409+0000 front 2026-03-09T18:03:13.903309+0000 (oldest deadline 2026-03-09T18:03:36.802985+0000) 2026-03-09T18:03:53.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:52 vm02 bash[44798]: debug 2026-03-09T18:03:52.806+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-09T18:03:20.303677+0000 front 2026-03-09T18:03:20.303540+0000 (oldest deadline 2026-03-09T18:03:42.603469+0000) 2026-03-09T18:03:53.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:52 vm02 bash[44798]: debug 2026-03-09T18:03:52.806+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-09T18:03:25.504650+0000 front 2026-03-09T18:03:25.504497+0000 (oldest deadline 2026-03-09T18:03:47.204063+0000) 2026-03-09T18:03:53.136 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:52 vm02 bash[44798]: debug 2026-03-09T18:03:52.806+0000 7f74c6873640 -1 osd.7 826 heartbeat_check: no reply from 192.168.123.100:6815 osd.3 since back 2026-03-09T18:03:27.204437+0000 front 2026-03-09T18:03:27.204767+0000 (oldest deadline 2026-03-09T18:03:52.504368+0000) 2026-03-09T18:03:53.964 INFO:journalctl@ceph.osd.7.vm02.stdout:Mar 09 18:03:53 vm02 bash[57884]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-osd-7 2026-03-09T18:03:54.014 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@osd.7.service' 2026-03-09T18:03:54.026 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:03:54.026 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T18:03:54.026 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 
2026-03-09T18:03:54.026 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@rgw.foo.a 2026-03-09T18:03:54.288 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 18:03:54 vm00 systemd[1]: Stopping Ceph rgw.foo.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:03:54.288 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 18:03:54 vm00 bash[53845]: debug 2026-03-09T18:03:54.069+0000 7f12a2f5e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:03:54.288 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 09 18:03:54 vm00 bash[53845]: debug 2026-03-09T18:03:54.069+0000 7f12a67cd980 -1 shutting down 2026-03-09T18:04:04.154 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@rgw.foo.a.service' 2026-03-09T18:04:04.169 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:04:04.169 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-09T18:04:04.169 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-09T18:04:04.170 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@prometheus.a 2026-03-09T18:04:04.290 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@prometheus.a.service' 2026-03-09T18:04:04.301 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:04:04.301 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T18:04:04.301 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e --force --keep-logs 2026-03-09T18:04:04.396 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T18:04:09.284 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:09.284 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:09.537 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:04:09.537 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:09.788 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:09.788 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:09.788 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: Stopping Ceph alertmanager.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:04:09.788 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 bash[56982]: ts=2026-03-09T18:04:09.663Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:04:09.788 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 bash[133712]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-alertmanager-a 2026-03-09T18:04:09.788 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@alertmanager.a.service: Deactivated successfully. 2026-03-09T18:04:09.788 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: Stopped Ceph alertmanager.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T18:04:10.215 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:10.215 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: Stopping Ceph node-exporter.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
2026-03-09T18:04:10.216 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:10 vm00 bash[133835]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-node-exporter-a 2026-03-09T18:04:10.216 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:10 vm00 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:04:10.216 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:10 vm00 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T18:04:10.216 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:10 vm00 systemd[1]: Stopped Ceph node-exporter.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T18:04:10.216 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:04:09 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:10.489 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:04:10 vm00 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:11.865 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e --force --keep-logs 2026-03-09T18:04:11.953 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T18:04:16.819 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:16.819 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:16.819 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
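rm-cluster is invoked once per host with --force --keep-logs: --keep-logs leaves /var/log/ceph in place so the logs can still be compressed and archived later in the teardown, and a second pass without it (under "Removing cluster..." further below) wipes what remains. A per-host sketch, assuming the cephadm binary path used in this run:

    import subprocess

    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    FSID = "16190428-1bdc-11f1-aea4-d920f1c7e51e"

    def rm_cluster(keep_logs: bool = True) -> None:
        """Remove every daemon and data directory for one fsid on this host (sketch)."""
        cmd = ["sudo", CEPHADM, "rm-cluster", "--fsid", FSID, "--force"]
        if keep_logs:
            cmd.append("--keep-logs")   # preserve /var/log/ceph for archiving
        subprocess.run(cmd, check=True)

    # First pass keeps logs for collection; the later pass without --keep-logs
    # deletes them once they have been archived.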
2026-03-09T18:04:17.097 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.097 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.097 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.097 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.097 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:16 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.097 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.375 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.375 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.375 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.636 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:17.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: Stopping Ceph iscsi.iscsi.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:04:17.636 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:17 vm02 bash[48996]: debug Shutdown received 2026-03-09T18:04:17.636 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:17 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:27.861 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:27.861 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 bash[58377]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-iscsi-iscsi-a 2026-03-09T18:04:27.861 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T18:04:27.861 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-09T18:04:27.861 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: Stopped Ceph iscsi.iscsi.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 
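The status=143/n/a and status=137/n/a results above follow the usual "128 + signal number" convention: 143 is SIGTERM (node-exporter exited on the termination signal) and 137 is SIGKILL. A small decoder, for reference:

    import signal

    def describe_exit_status(status: int) -> str:
        """Map a shell-style exit status to a signal name when it encodes one."""
        if status > 128:
            try:
                return f"killed by {signal.Signals(status - 128).name}"
            except ValueError:
                pass
        return f"exit code {status}"

    assert describe_exit_status(143) == "killed by SIGTERM"   # node-exporter above
    assert describe_exit_status(137) == "killed by SIGKILL"   # iscsi.iscsi.a above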
2026-03-09T18:04:27.861 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.121 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 09 18:04:27 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.122 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.124 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.124 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: Stopping Ceph grafana.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 2026-03-09T18:04:28.373 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 bash[51223]: logger=server t=2026-03-09T18:04:28.156046187Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T18:04:28.373 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 bash[51223]: logger=grafana-apiserver t=2026-03-09T18:04:28.156172583Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T18:04:28.374 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 bash[51223]: logger=tracing t=2026-03-09T18:04:28.156204133Z level=info msg="Closing tracing" 2026-03-09T18:04:28.374 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 bash[51223]: logger=ticker t=2026-03-09T18:04:28.156332834Z level=info msg=stopped last_tick=2026-03-09T18:04:20Z 2026-03-09T18:04:28.374 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 bash[58543]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-grafana-a 2026-03-09T18:04:28.374 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@grafana.a.service: Deactivated successfully. 2026-03-09T18:04:28.374 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: Stopped Ceph grafana.a for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T18:04:28.636 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.636 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.636 INFO:journalctl@ceph.grafana.a.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:28.968 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: Stopping Ceph node-exporter.b for 16190428-1bdc-11f1-aea4-d920f1c7e51e... 
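The journalctl@ceph.<daemon>.<host>.stdout lines throughout this section come from follower processes that teuthology keeps attached to every daemon unit ("journalctl -f -n 0 -u <unit>.service") so container output ends up in the test log. A rough local approximation, with the line prefix copied from this log and everything else an assumption rather than teuthology's implementation:

    import subprocess

    def follow_unit(unit: str, label: str) -> None:
        """Stream a systemd unit's journal, prefixing each line like the test log does."""
        proc = subprocess.Popen(
            ["journalctl", "-f", "-n", "0", "-u", f"{unit}.service"],
            stdout=subprocess.PIPE,
            text=True,
        )
        assert proc.stdout is not None
        for line in proc.stdout:               # runs until interrupted
            print(f"INFO:journalctl@{label}.stdout:{line}", end="")

    # e.g. follow_unit("ceph-<fsid>@grafana.a", "ceph.grafana.a.vm02")  # hypothetical values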
2026-03-09T18:04:28.968 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 bash[58709]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e-node-exporter-b 2026-03-09T18:04:28.968 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:04:28.968 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T18:04:28.968 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:28 vm02 systemd[1]: Stopped Ceph node-exporter.b for 16190428-1bdc-11f1-aea4-d920f1c7e51e. 2026-03-09T18:04:29.246 INFO:journalctl@ceph.node-exporter.b.vm02.stdout:Mar 09 18:04:29 vm02 systemd[1]: /etc/systemd/system/ceph-16190428-1bdc-11f1-aea4-d920f1c7e51e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:04:29.680 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:04:29.688 INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T18:04:29.689 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:04:29.689 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:04:29.697 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T18:04:29.697 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583/remote/vm00/crash 2026-03-09T18:04:29.697 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/crash -- . 2026-03-09T18:04:29.740 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/crash: Cannot open: No such file or directory 2026-03-09T18:04:29.740 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:04:29.741 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583/remote/vm02/crash 2026-03-09T18:04:29.741 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/crash -- . 2026-03-09T18:04:29.751 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/crash: Cannot open: No such file or directory 2026-03-09T18:04:29.751 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:04:29.751 INFO:tasks.cephadm:Checking cluster log for badness... 
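(The crash-dump tar just above failed only because no /var/lib/ceph/<fsid>/crash directory existed on either host, i.e. nothing crashed during the run.) The "Checking cluster log for badness" step that follows greps ceph.log for [ERR]/[WRN]/[SEC] entries, keeps only lines matching the job's log-only-match pattern (CEPHADM_), drops every pattern on the job's ignore list, and looks at the first surviving line; an empty result means the run is not flagged. The same filter chain in Python, with the ignore list abbreviated for illustration:

    import re

    ONLY_MATCH = [r"CEPHADM_"]
    IGNORELIST = [                      # abbreviated; the full list is visible in the egrep chain below
        r"\(MDS_ALL_DOWN\)",
        r"reached quota",
        r"overall HEALTH_",
        r"CEPHADM_STRAY_DAEMON",
    ]

    def first_bad_line(cluster_log_lines):
        """Return the first offending cluster-log line, or None (sketch of the egrep chain)."""
        for line in cluster_log_lines:
            if not re.search(r"\[ERR\]|\[WRN\]|\[SEC\]", line):
                continue
            if not any(re.search(p, line) for p in ONLY_MATCH):
                continue
            if any(re.search(p, line) for p in IGNORELIST):
                continue
            return line
        return None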
2026-03-09T18:04:29.751 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'reached quota' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(POOL_FULL\)' | egrep -v '\(SMALLER_PGP_NUM\)' | egrep -v '\(CACHE_POOL_NO_HIT_SET\)' | egrep -v '\(CACHE_POOL_NEAR_FULL\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1 2026-03-09T18:04:29.799 INFO:tasks.cephadm:Compressing logs... 2026-03-09T18:04:29.799 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:04:29.840 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:04:29.848 INFO:teuthology.orchestra.run.vm00.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:04:29.848 INFO:teuthology.orchestra.run.vm00.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:04:29.849 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.3.log 2026-03-09T18:04:29.849 INFO:teuthology.orchestra.run.vm02.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:04:29.849 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log 2026-03-09T18:04:29.850 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:04:29.850 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.c.log 2026-03-09T18:04:29.850 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mgr.x.log 2026-03-09T18:04:29.850 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log 2026-03-09T18:04:29.851 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mgr.x.log: /var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.b.log 2026-03-09T18:04:29.852 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.5.log 2026-03-09T18:04:29.852 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.1.log 2026-03-09T18:04:29.852 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.7.log 2026-03-09T18:04:29.853 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.c.log: 
/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mgr.y.log 2026-03-09T18:04:29.854 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.a.log 2026-03-09T18:04:29.856 INFO:teuthology.orchestra.run.vm02.stderr: 93.1% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mgr.x.log.gz 2026-03-09T18:04:29.858 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mgr.y.log: 92.5% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:04:29.859 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.7.log: /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.6.log 2026-03-09T18:04:29.859 INFO:teuthology.orchestra.run.vm02.stderr: 88.1% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log.gz 2026-03-09T18:04:29.862 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.2.log 2026-03-09T18:04:29.863 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.audit.log 2026-03-09T18:04:29.863 INFO:teuthology.orchestra.run.vm02.stderr: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:04:29.864 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-volume.log 2026-03-09T18:04:29.864 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.cephadm.log 2026-03-09T18:04:29.865 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.4.log 2026-03-09T18:04:29.877 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.a.log: 93.4%gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.audit.log 2026-03-09T18:04:29.877 INFO:teuthology.orchestra.run.vm00.stderr: -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.log.gz 2026-03-09T18:04:29.880 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.cephadm.log: 92.4% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.audit.log.gz 2026-03-09T18:04:29.880 INFO:teuthology.orchestra.run.vm02.stderr: 80.1% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.cephadm.log.gz 2026-03-09T18:04:29.893 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/tcmu-runner.log 2026-03-09T18:04:29.896 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-volume.log 2026-03-09T18:04:29.905 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.4.log: /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/tcmu-runner.log: 73.2% -- replaced with 
/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/tcmu-runner.log.gz 2026-03-09T18:04:29.905 INFO:teuthology.orchestra.run.vm02.stderr: 95.8% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-volume.log.gz 2026-03-09T18:04:29.912 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-client.rgw.foo.a.log 2026-03-09T18:04:29.913 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.cephadm.log 2026-03-09T18:04:29.924 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-client.rgw.foo.a.log: gzip -5 --verbose -- /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.0.log 2026-03-09T18:04:29.928 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.cephadm.log.gz 2026-03-09T18:04:29.934 INFO:teuthology.orchestra.run.vm00.stderr: 95.3% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph.audit.log.gz 2026-03-09T18:04:29.956 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.0.log: 95.8% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-volume.log.gz 2026-03-09T18:04:30.022 INFO:teuthology.orchestra.run.vm00.stderr: 94.3% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-client.rgw.foo.a.log.gz 2026-03-09T18:04:30.824 INFO:teuthology.orchestra.run.vm00.stderr: 91.3% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mgr.y.log.gz 2026-03-09T18:04:31.757 INFO:teuthology.orchestra.run.vm02.stderr: 92.8% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.b.log.gz 2026-03-09T18:04:32.488 INFO:teuthology.orchestra.run.vm00.stderr: 92.6% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.c.log.gz 2026-03-09T18:04:34.515 INFO:teuthology.orchestra.run.vm00.stderr: 91.9% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-mon.a.log.gz 2026-03-09T18:04:40.773 INFO:teuthology.orchestra.run.vm02.stderr: 94.7% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.7.log.gz 2026-03-09T18:04:40.804 INFO:teuthology.orchestra.run.vm02.stderr: 94.7% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.5.log.gz 2026-03-09T18:04:40.821 INFO:teuthology.orchestra.run.vm02.stderr: 94.7% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.6.log.gz 2026-03-09T18:04:40.868 INFO:teuthology.orchestra.run.vm02.stderr: 94.7% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.4.log.gz 2026-03-09T18:04:40.869 INFO:teuthology.orchestra.run.vm02.stderr: 2026-03-09T18:04:40.869 INFO:teuthology.orchestra.run.vm02.stderr:real 0m11.025s 2026-03-09T18:04:40.869 INFO:teuthology.orchestra.run.vm02.stderr:user 0m20.672s 2026-03-09T18:04:40.869 INFO:teuthology.orchestra.run.vm02.stderr:sys 0m1.238s 2026-03-09T18:04:41.364 INFO:teuthology.orchestra.run.vm00.stderr: 94.7% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.1.log.gz 2026-03-09T18:04:41.654 INFO:teuthology.orchestra.run.vm00.stderr: 94.6% 94.6% -- replaced with 
/var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.3.log.gz 2026-03-09T18:04:41.654 INFO:teuthology.orchestra.run.vm00.stderr: -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.2.log.gz 2026-03-09T18:04:41.754 INFO:teuthology.orchestra.run.vm00.stderr: 94.7% -- replaced with /var/log/ceph/16190428-1bdc-11f1-aea4-d920f1c7e51e/ceph-osd.0.log.gz 2026-03-09T18:04:41.756 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:04:41.756 INFO:teuthology.orchestra.run.vm00.stderr:real 0m11.913s 2026-03-09T18:04:41.756 INFO:teuthology.orchestra.run.vm00.stderr:user 0m22.282s 2026-03-09T18:04:41.756 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m1.378s 2026-03-09T18:04:41.756 INFO:tasks.cephadm:Archiving logs... 2026-03-09T18:04:41.756 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583/remote/vm00/log 2026-03-09T18:04:41.756 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:04:42.662 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583/remote/vm02/log 2026-03-09T18:04:42.663 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:04:43.413 INFO:tasks.cephadm:Removing cluster... 2026-03-09T18:04:43.413 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e --force 2026-03-09T18:04:43.504 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T18:04:44.764 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 16190428-1bdc-11f1-aea4-d920f1c7e51e --force 2026-03-09T18:04:44.849 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: 16190428-1bdc-11f1-aea4-d920f1c7e51e 2026-03-09T18:04:46.136 INFO:tasks.cephadm:Removing cephadm ... 2026-03-09T18:04:46.136 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T18:04:46.139 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T18:04:46.143 INFO:tasks.cephadm:Teardown complete 2026-03-09T18:04:46.143 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T18:04:46.167 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T18:04:46.167 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T18:04:46.181 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T18:04:46.201 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 
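"Removing packages: ... on Debian system" is carried out as a per-package apt-get purge loop (visible just below), with "|| true" so a package that is not installed does not abort the teardown. The same idea as a sketch, with a hypothetical, shortened package list:

    import subprocess

    PACKAGES = ["ceph", "cephadm", "ceph-mds", "ceph-mgr"]   # abbreviated for illustration

    for pkg in PACKAGES:
        # check=False mirrors the "|| true" in the shell loop: a missing package
        # just reports "not installed" and the loop moves on to the next one.
        subprocess.run(
            ["sudo", "env", "DEBIAN_FRONTEND=noninteractive",
             "apt-get", "-y",
             "-o", "Dpkg::Options::=--force-confdef",
             "-o", "Dpkg::Options::=--force-confold",
             "purge", pkg],
            check=False,
        )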
2026-03-09T18:04:46.201 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T18:04:46.206 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T18:04:46.206 DEBUG:teuthology.orchestra.run.vm02:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T18:04:46.269 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:04:46.278 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:04:46.441 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:04:46.441 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:04:46.460 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:04:46.460 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T18:04:46.588 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:46.588 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:04:46.588 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T18:04:46.588 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:46.602 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:04:46.602 INFO:teuthology.orchestra.run.vm00.stdout: ceph* 2026-03-09T18:04:46.627 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:46.627 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:04:46.627 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T18:04:46.627 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:46.638 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:04:46.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph* 2026-03-09T18:04:46.774 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:04:46.774 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T18:04:46.823 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 
30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T18:04:46.825 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:46.976 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:04:46.977 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T18:04:47.014 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T18:04:47.015 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:47.984 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:48.018 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:04:48.035 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:48.071 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:04:48.185 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:04:48.185 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:04:48.244 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:04:48.245 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T18:04:48.369 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:48.369 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:04:48.369 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T18:04:48.369 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
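The recurring "W: --force-yes is deprecated" warning is harmless here, but on current apt the blanket flag is spelled out with the specific --allow-* options it used to enable. If the purge loop above were updated, the extra arguments might look like the following (a suggestion, not what this run does):

    # Possible replacement for the deprecated --force-yes in the purge loop above;
    # apt-get (apt >= 1.1) accepts these narrower --allow-* options.
    APT_FLAGS = [
        "-y",
        "--allow-downgrades",
        "--allow-remove-essential",
        "--allow-change-held-packages",
    ]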
2026-03-09T18:04:48.383 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:04:48.383 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T18:04:48.427 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:48.427 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:04:48.427 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T18:04:48.427 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:48.437 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:04:48.438 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T18:04:48.623 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T18:04:48.623 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T18:04:48.625 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T18:04:48.625 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T18:04:48.663 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T18:04:48.666 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:48.666 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T18:04:48.668 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:48.683 INFO:teuthology.orchestra.run.vm00.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:48.687 INFO:teuthology.orchestra.run.vm02.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:48.712 INFO:teuthology.orchestra.run.vm00.stdout:Looking for files to backup/remove ... 2026-03-09T18:04:48.713 INFO:teuthology.orchestra.run.vm00.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 
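"Looking for files to backup/remove" and "Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*." come from the user cleanup performed while purging the cephadm package: its removal scripts delete the cephadm system user (see "Removing user `cephadm'" just below), and deluser skips the home directory because it matches one of the protected path patterns (configurable in /etc/deluser.conf). The check itself is a plain regular-expression match:

    import re

    protected = [r"^/var/.*", r"^/usr/.*", r"^/etc/.*"]   # illustrative subset of patterns
    home = "/var/lib/cephadm"
    if any(re.match(p, home) for p in protected):
        print(f"Not backing up/removing `{home}', it matches ^/var/.*.")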
2026-03-09T18:04:48.715 INFO:teuthology.orchestra.run.vm00.stdout:Removing user `cephadm' ... 2026-03-09T18:04:48.715 INFO:teuthology.orchestra.run.vm00.stdout:Warning: group `nogroup' has no more members. 2026-03-09T18:04:48.716 INFO:teuthology.orchestra.run.vm02.stdout:Looking for files to backup/remove ... 2026-03-09T18:04:48.718 INFO:teuthology.orchestra.run.vm02.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-09T18:04:48.720 INFO:teuthology.orchestra.run.vm02.stdout:Removing user `cephadm' ... 2026-03-09T18:04:48.720 INFO:teuthology.orchestra.run.vm02.stdout:Warning: group `nogroup' has no more members. 2026-03-09T18:04:48.725 INFO:teuthology.orchestra.run.vm00.stdout:Done. 2026-03-09T18:04:48.732 INFO:teuthology.orchestra.run.vm02.stdout:Done. 2026-03-09T18:04:48.749 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:04:48.755 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:04:48.844 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T18:04:48.846 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T18:04:48.846 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:48.847 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:49.849 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:49.879 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:49.881 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:04:49.913 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:04:50.063 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:04:50.063 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:04:50.092 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:04:50.092 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:04:50.218 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:50.218 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:04:50.218 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T18:04:50.218 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:50.230 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:04:50.231 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds* 2026-03-09T18:04:50.271 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:50.271 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:04:50.272 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T18:04:50.272 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:50.284 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:04:50.284 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds* 2026-03-09T18:04:50.405 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:04:50.405 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T18:04:50.449 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T18:04:50.451 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:50.458 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:04:50.458 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T18:04:50.503 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T18:04:50.505 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T18:04:50.855 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:04:50.906 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:04:50.946 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T18:04:50.948 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:51.003 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T18:04:51.005 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:52.427 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:52.461 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:04:52.497 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:52.531 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:04:52.654 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:04:52.655 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:04:52.726 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:04:52.727 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:04:52.834 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:52.834 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev 2026-03-09T18:04:52.835 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:04:52.847 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:04:52.847 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T18:04:52.848 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents* 2026-03-09T18:04:52.916 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:52.916 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout: sg3-utils-udev 2026-03-09T18:04:52.917 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:52.930 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:04:52.930 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T18:04:52.931 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-k8sevents* 2026-03-09T18:04:53.033 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T18:04:53.033 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T18:04:53.067 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 
95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T18:04:53.069 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.077 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.099 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.112 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T18:04:53.113 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T18:04:53.131 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.152 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T18:04:53.154 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.190 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.211 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.242 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.641 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-09T18:04:53.642 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:53.748 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 
117937 files and directories currently installed.) 2026-03-09T18:04:53.749 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:55.111 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:55.125 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:04:55.144 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:04:55.158 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:04:55.346 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:04:55.347 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:04:55.348 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:04:55.348 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:04:55.519 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:04:55.520 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:04:55.532 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:04:55.532 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:04:55.532 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:04:55.532 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:04:55.533 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:04:55.546 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:04:55.547 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T18:04:55.697 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T18:04:55.697 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T18:04:55.707 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T18:04:55.707 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T18:04:55.734 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 
15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-09T18:04:55.736 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:55.747 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-09T18:04:55.749 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:55.793 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:55.804 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:56.207 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:56.222 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:56.638 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:56.686 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.020 INFO:teuthology.orchestra.run.vm02.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.137 INFO:teuthology.orchestra.run.vm00.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.433 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.467 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.520 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.539 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:57.916 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:04:57.949 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:04:58.013 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.) 2026-03-09T18:04:58.015 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:58.017 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:04:58.048 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:04:58.113 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.) 2026-03-09T18:04:58.114 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:58.632 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:58.759 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:59.042 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:59.153 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:59.448 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:59.568 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:59.849 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:04:59.997 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:01.243 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:01.278 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:01.397 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:01.431 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:01.456 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:01.456 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:01.608 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:01.608 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:01.609 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:01.624 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:05:01.626 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse* 2026-03-09T18:05:01.635 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:01.635 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:01.787 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:05:01.787 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T18:05:01.824 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 
90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117443 files and directories currently installed.) 2026-03-09T18:05:01.827 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:01.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:01.832 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:01.843 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:05:01.843 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse* 2026-03-09T18:05:02.002 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:05:02.002 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T18:05:02.038 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 
80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117443 files and directories currently installed.) 2026-03-09T18:05:02.040 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:02.231 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:05:02.319 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T18:05:02.322 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:02.712 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:05:02.882 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T18:05:02.884 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:04.120 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:04.154 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:04.331 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:04.331 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T18:05:04.451 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:04.458 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:04.459 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:04.478 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:04.478 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:04.487 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:04.510 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:04.641 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:04.642 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:04.697 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:04.698 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:04.783 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:04.784 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:04.784 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:04.803 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:04.803 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:04.835 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:04.846 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:04.847 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:04.864 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:04.864 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:04.896 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:05.036 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:05.036 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:05.081 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:05.082 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:05.153 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T18:05:05.153 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:05.154 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:05.169 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:05.169 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:05.202 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:05.209 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:05.210 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:05.225 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:05.225 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:05.258 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:05.373 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:05.374 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:05.434 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:05.435 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:05.510 INFO:teuthology.orchestra.run.vm00.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T18:05:05.510 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:05.510 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:05.510 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T18:05:05.511 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:05.530 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:05.530 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:05.563 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:05.628 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:05.628 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:05.628 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:05.628 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet zip 2026-03-09T18:05:05.629 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:05.643 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:05:05.643 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T18:05:05.762 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:05.762 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-09T18:05:05.882 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:05.896 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:05:05.896 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T18:05:05.913 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T18:05:05.913 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T18:05:05.973 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T18:05:05.975 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:06.125 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T18:05:06.126 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T18:05:06.199 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T18:05:06.200 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:06.257 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:06.262 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:06.267 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:06.286 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:07.364 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:07.399 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:07.424 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:07.459 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:07.600 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:07.601 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:07.646 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:07.646 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:07.741 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T18:05:07.741 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:07.741 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:07.741 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-09T18:05:07.742 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:07.767 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:07.767 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:07.802 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:07.830 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:07.831 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet zip 2026-03-09T18:05:07.831 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:07.846 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:07.846 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:07.879 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:07.981 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:07.981 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:08.061 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:08.062 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:08.119 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:08.120 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:08.120 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-09T18:05:08.120 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:08.139 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:08.139 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:08.171 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:08.199 INFO:teuthology.orchestra.run.vm02.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T18:05:08.199 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:08.199 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:08.199 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:08.199 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet zip 2026-03-09T18:05:08.200 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:08.218 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:08.218 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:08.250 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:08.354 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:08.354 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:08.433 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:08.433 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:08.474 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-09T18:05:08.475 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:05:08.486 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:05:08.486 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd* 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet zip 2026-03-09T18:05:08.565 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:08.575 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:05:08.575 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd* 2026-03-09T18:05:08.648 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:05:08.648 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T18:05:08.681 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 
40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T18:05:08.682 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:08.728 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T18:05:08.728 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T18:05:08.765 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T18:05:08.768 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:09.695 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:09.729 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:09.783 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:09.815 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:09.903 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:09.904 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:09.997 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:09.997 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
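In apt's plan output above, the trailing `*` after a package name (`python3-rbd*`, and later `librados2*` etc.) marks a purge rather than a plain removal: the package's configuration files are deleted along with its contents, which is also why a later pass logs `Purging configuration files for qemu-block-extra`. A minimal illustration of the distinction (package name used only as an example):

    # remove: delete installed files, keep conffiles (dpkg status becomes "rc")
    sudo apt-get remove python3-rbd
    # purge: also delete the conffiles (equivalent to: apt-get remove --purge)
    sudo apt-get purge python3-rbd
    # list leftover "removed but config remains" entries:
    dpkg -l | awk '$1 == "rc" {print $2}'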
2026-03-09T18:05:10.024 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:10.024 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:10.024 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:10.024 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-09T18:05:10.025 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:05:10.036 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:05:10.036 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev* libcephfs2* 2026-03-09T18:05:10.126 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet zip 2026-03-09T18:05:10.127 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:10.134 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:05:10.135 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-dev* libcephfs2* 2026-03-09T18:05:10.197 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T18:05:10.197 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T18:05:10.231 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 
35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T18:05:10.234 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:10.243 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:10.266 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:05:10.293 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T18:05:10.293 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T18:05:10.330 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T18:05:10.333 INFO:teuthology.orchestra.run.vm02.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:10.344 INFO:teuthology.orchestra.run.vm02.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:10.366 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:05:11.316 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:11.350 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:11.439 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:11.473 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:11.560 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:11.561 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:11.681 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:11.681 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:11.730 INFO:teuthology.orchestra.run.vm00.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T18:05:11.730 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:11.730 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-09T18:05:11.731 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:11.755 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:11.755 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:11.788 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:11.833 INFO:teuthology.orchestra.run.vm02.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet zip 2026-03-09T18:05:11.834 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:11.857 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:11.857 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:11.889 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:11.979 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:11.980 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:12.082 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:12.082 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-09T18:05:12.135 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:12.135 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:12.135 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:12.135 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:12.135 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:12.136 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:05:12.147 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:05:12.147 INFO:teuthology.orchestra.run.vm00.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T18:05:12.147 INFO:teuthology.orchestra.run.vm00.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T18:05:12.232 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:12.232 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:12.232 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:12.232 INFO:teuthology.orchestra.run.vm02.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:12.232 INFO:teuthology.orchestra.run.vm02.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:12.232 INFO:teuthology.orchestra.run.vm02.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:12.233 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:05:12.248 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:05:12.249 INFO:teuthology.orchestra.run.vm02.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T18:05:12.249 INFO:teuthology.orchestra.run.vm02.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T18:05:12.318 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T18:05:12.318 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T18:05:12.358 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T18:05:12.360 INFO:teuthology.orchestra.run.vm00.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.372 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.383 INFO:teuthology.orchestra.run.vm00.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.394 INFO:teuthology.orchestra.run.vm00.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:05:12.422 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T18:05:12.423 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T18:05:12.462 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T18:05:12.464 INFO:teuthology.orchestra.run.vm02.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.475 INFO:teuthology.orchestra.run.vm02.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.487 INFO:teuthology.orchestra.run.vm02.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.498 INFO:teuthology.orchestra.run.vm02.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:05:12.814 INFO:teuthology.orchestra.run.vm00.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.825 INFO:teuthology.orchestra.run.vm00.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.835 INFO:teuthology.orchestra.run.vm00.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T18:05:12.860 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:05:12.891 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:05:12.916 INFO:teuthology.orchestra.run.vm02.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.928 INFO:teuthology.orchestra.run.vm02.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.939 INFO:teuthology.orchestra.run.vm02.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:12.957 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:05:12.959 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:05:12.965 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:05:12.999 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:05:13.060 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:05:13.062 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:05:14.383 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:14.416 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:14.470 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:14.503 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:14.613 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:14.614 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:14.698 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:14.698 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
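The `Processing triggers for libc-bin ...` entries that follow the removal of the shared libraries (librados2, librbd1, librgw2) are dpkg running libc-bin's trigger, which refreshes the dynamic-linker cache via ldconfig. A quick, purely illustrative way to confirm the Ceph client libraries are gone after this step:

    # The libc-bin trigger effectively re-runs:
    sudo ldconfig
    # Check the linker cache no longer lists the removed Ceph libraries:
    ldconfig -p | grep -E 'librados|librbd|librgw' || echo "no Ceph libraries cached"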
2026-03-09T18:05:14.808 INFO:teuthology.orchestra.run.vm00.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T18:05:14.809 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:14.809 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:14.809 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:14.809 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:14.809 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:14.809 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:14.810 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:14.838 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:14.839 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:14.871 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-09T18:05:14.885 INFO:teuthology.orchestra.run.vm02.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T18:05:14.885 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:14.885 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:14.885 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:14.886 INFO:teuthology.orchestra.run.vm02.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:14.886 INFO:teuthology.orchestra.run.vm02.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:14.886 INFO:teuthology.orchestra.run.vm02.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:14.886 INFO:teuthology.orchestra.run.vm02.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:14.887 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:14.915 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:14.915 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:14.949 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:15.062 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 
2026-03-09T18:05:15.062 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:15.147 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:15.147 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:15.192 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:15.193 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:05:15.218 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 
2026-03-09T18:05:15.218 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:15.219 DEBUG:teuthology.orchestra.run.vm00:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T18:05:15.273 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T18:05:15.308 INFO:teuthology.orchestra.run.vm02.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T18:05:15.308 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:05:15.308 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:15.308 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:15.309 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt 
autoremove' to remove them. 2026-03-09T18:05:15.329 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:05:15.329 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:15.331 DEBUG:teuthology.orchestra.run.vm02:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T18:05:15.351 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:15.386 DEBUG:teuthology.orchestra.run.vm02:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T18:05:15.463 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:15.548 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-09T18:05:15.549 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-09T18:05:15.663 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-09T18:05:15.664 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:15.665 
INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:15.665 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:15.816 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-09T18:05:15.816 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:05:15.816 INFO:teuthology.orchestra.run.vm02.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:05:15.816 INFO:teuthology.orchestra.run.vm02.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:05:15.816 INFO:teuthology.orchestra.run.vm02.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:05:15.816 INFO:teuthology.orchestra.run.vm02.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:05:15.817 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:05:15.834 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T18:05:15.834 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 107 MB disk space will be freed. 
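Before autoremove runs, teuthology purges any package left in an unpacked or half-installed, reinstall-required state via the dpkg -l | grep '^.\(U\|H\)R' | awk | xargs dpkg -P pipeline shown above. A minimal standalone sketch of that same step in Python, assuming a Debian/Ubuntu host with passwordless sudo (the function name is illustrative, not from teuthology):

import subprocess

def purge_broken_packages():
    # dpkg -l status columns: second char U (unpacked) or H (half-installed)
    # plus third char R (reinstall required) marks the packages the pipeline purges.
    listing = subprocess.run(["dpkg", "-l"], capture_output=True, text=True, check=True).stdout
    broken = [line.split()[1] for line in listing.splitlines()
              if len(line) > 3 and line[1] in "UH" and line[2] == "R"]
    if broken:
        # Same flag as in the log: force removal despite the reinstall-required state.
        subprocess.run(["sudo", "dpkg", "-P", "--force-remove-reinstreq", *broken], check=True)
    return broken

On a clean node, as here, the pipeline matches nothing and the xargs --no-run-if-empty guard means dpkg is never invoked.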
2026-03-09T18:05:15.871 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:05:15.873 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:15.889 INFO:teuthology.orchestra.run.vm00.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T18:05:15.900 INFO:teuthology.orchestra.run.vm00.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T18:05:15.910 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:05:15.920 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:05:15.929 INFO:teuthology.orchestra.run.vm00.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T18:05:15.938 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:05:15.947 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:05:15.956 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:05:15.977 INFO:teuthology.orchestra.run.vm00.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T18:05:15.980 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T18:05:15.980 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T18:05:15.988 INFO:teuthology.orchestra.run.vm00.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T18:05:15.999 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.012 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.024 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.025 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:05:16.028 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:16.036 INFO:teuthology.orchestra.run.vm00.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 
2026-03-09T18:05:16.043 INFO:teuthology.orchestra.run.vm02.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T18:05:16.047 INFO:teuthology.orchestra.run.vm00.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T18:05:16.055 INFO:teuthology.orchestra.run.vm02.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T18:05:16.059 INFO:teuthology.orchestra.run.vm00.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T18:05:16.066 INFO:teuthology.orchestra.run.vm02.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:05:16.070 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T18:05:16.077 INFO:teuthology.orchestra.run.vm02.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:05:16.081 INFO:teuthology.orchestra.run.vm00.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T18:05:16.087 INFO:teuthology.orchestra.run.vm02.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T18:05:16.097 INFO:teuthology.orchestra.run.vm02.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:05:16.107 INFO:teuthology.orchestra.run.vm02.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:05:16.115 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T18:05:16.117 INFO:teuthology.orchestra.run.vm02.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:05:16.127 INFO:teuthology.orchestra.run.vm00.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T18:05:16.135 INFO:teuthology.orchestra.run.vm02.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T18:05:16.137 INFO:teuthology.orchestra.run.vm00.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T18:05:16.145 INFO:teuthology.orchestra.run.vm02.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T18:05:16.148 INFO:teuthology.orchestra.run.vm00.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T18:05:16.155 INFO:teuthology.orchestra.run.vm02.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.158 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T18:05:16.165 INFO:teuthology.orchestra.run.vm02.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.169 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T18:05:16.175 INFO:teuthology.orchestra.run.vm02.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.180 INFO:teuthology.orchestra.run.vm00.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T18:05:16.186 INFO:teuthology.orchestra.run.vm02.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:05:16.192 INFO:teuthology.orchestra.run.vm00.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T18:05:16.196 INFO:teuthology.orchestra.run.vm02.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T18:05:16.203 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:05:16.207 INFO:teuthology.orchestra.run.vm02.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T18:05:16.211 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T18:05:16.217 INFO:teuthology.orchestra.run.vm02.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 
2026-03-09T18:05:16.221 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:05:16.228 INFO:teuthology.orchestra.run.vm02.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T18:05:16.238 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:05:16.250 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T18:05:16.253 INFO:teuthology.orchestra.run.vm02.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T18:05:16.262 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T18:05:16.264 INFO:teuthology.orchestra.run.vm02.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T18:05:16.275 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T18:05:16.275 INFO:teuthology.orchestra.run.vm02.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T18:05:16.286 INFO:teuthology.orchestra.run.vm02.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T18:05:16.290 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T18:05:16.297 INFO:teuthology.orchestra.run.vm02.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T18:05:16.307 INFO:teuthology.orchestra.run.vm02.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T18:05:16.310 INFO:teuthology.orchestra.run.vm00.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T18:05:16.318 INFO:teuthology.orchestra.run.vm02.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T18:05:16.328 INFO:teuthology.orchestra.run.vm02.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T18:05:16.339 INFO:teuthology.orchestra.run.vm02.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:05:16.347 INFO:teuthology.orchestra.run.vm02.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T18:05:16.357 INFO:teuthology.orchestra.run.vm02.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:05:16.374 INFO:teuthology.orchestra.run.vm02.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:05:16.385 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T18:05:16.396 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T18:05:16.407 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T18:05:16.421 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T18:05:16.439 INFO:teuthology.orchestra.run.vm02.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T18:05:16.724 INFO:teuthology.orchestra.run.vm00.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T18:05:16.759 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T18:05:16.783 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T18:05:16.839 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T18:05:16.856 INFO:teuthology.orchestra.run.vm02.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T18:05:16.886 INFO:teuthology.orchestra.run.vm02.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T18:05:16.888 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastescript (2.0.2-4) ... 
2026-03-09T18:05:16.911 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T18:05:16.939 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T18:05:16.966 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T18:05:16.986 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T18:05:16.998 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T18:05:17.012 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T18:05:17.052 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T18:05:17.061 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T18:05:17.107 INFO:teuthology.orchestra.run.vm02.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T18:05:17.118 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T18:05:17.170 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T18:05:17.311 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T18:05:17.359 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T18:05:17.403 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:17.420 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T18:05:17.450 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:17.470 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T18:05:17.499 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T18:05:17.516 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:17.556 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T18:05:17.560 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:05:17.613 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T18:05:17.614 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T18:05:17.663 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T18:05:17.674 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T18:05:17.709 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T18:05:17.725 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T18:05:17.756 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T18:05:17.773 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T18:05:17.803 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T18:05:17.819 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-portend (3.0.0-1) ... 
2026-03-09T18:05:17.851 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T18:05:17.867 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T18:05:17.902 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T18:05:17.918 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T18:05:17.966 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T18:05:18.014 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T18:05:18.025 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T18:05:18.091 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T18:05:18.138 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T18:05:18.142 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T18:05:18.185 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T18:05:18.200 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T18:05:18.234 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T18:05:18.251 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T18:05:18.294 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T18:05:18.301 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T18:05:18.343 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T18:05:18.352 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T18:05:18.394 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T18:05:18.409 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T18:05:18.443 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T18:05:18.460 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T18:05:18.493 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T18:05:18.512 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T18:05:18.543 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T18:05:18.563 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T18:05:18.592 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T18:05:18.613 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T18:05:18.642 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T18:05:18.663 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T18:05:18.689 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T18:05:18.714 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rsa (4.8-1) ... 
2026-03-09T18:05:18.741 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T18:05:18.765 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T18:05:18.788 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T18:05:18.812 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T18:05:18.815 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T18:05:18.860 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T18:05:18.871 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T18:05:18.906 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T18:05:18.920 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T18:05:18.944 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T18:05:18.954 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T18:05:18.991 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T18:05:19.002 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T18:05:19.038 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T18:05:19.049 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-09T18:05:19.086 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T18:05:19.098 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T18:05:19.134 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T18:05:19.148 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T18:05:19.182 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-09T18:05:19.191 INFO:teuthology.orchestra.run.vm00.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T18:05:19.211 INFO:teuthology.orchestra.run.vm00.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T18:05:19.231 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T18:05:19.284 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T18:05:19.328 INFO:teuthology.orchestra.run.vm02.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T18:05:19.350 INFO:teuthology.orchestra.run.vm02.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T18:05:19.634 INFO:teuthology.orchestra.run.vm00.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-09T18:05:19.644 INFO:teuthology.orchestra.run.vm00.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T18:05:19.662 INFO:teuthology.orchestra.run.vm00.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T18:05:19.677 INFO:teuthology.orchestra.run.vm00.stdout:Removing zip (3.0-12build2) ... 2026-03-09T18:05:19.700 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-09T18:05:19.709 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:05:19.749 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T18:05:19.756 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T18:05:19.771 INFO:teuthology.orchestra.run.vm02.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-09T18:05:19.776 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T18:05:19.782 INFO:teuthology.orchestra.run.vm02.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T18:05:19.800 INFO:teuthology.orchestra.run.vm02.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T18:05:19.817 INFO:teuthology.orchestra.run.vm02.stdout:Removing zip (3.0-12build2) ... 2026-03-09T18:05:19.842 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:05:19.852 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:05:19.897 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T18:05:19.904 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T18:05:19.920 INFO:teuthology.orchestra.run.vm02.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T18:05:21.273 INFO:teuthology.orchestra.run.vm00.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T18:05:21.274 INFO:teuthology.orchestra.run.vm00.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T18:05:21.389 INFO:teuthology.orchestra.run.vm02.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T18:05:21.391 INFO:teuthology.orchestra.run.vm02.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T18:05:23.498 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:05:23.500 DEBUG:teuthology.parallel:result is None 2026-03-09T18:05:23.567 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T18:05:23.570 DEBUG:teuthology.parallel:result is None 2026-03-09T18:05:23.570 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local 2026-03-09T18:05:23.570 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm02.local 2026-03-09T18:05:23.570 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T18:05:23.571 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T18:05:23.579 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update 2026-03-09T18:05:23.621 DEBUG:teuthology.orchestra.run.vm02:> sudo apt-get update 2026-03-09T18:05:23.768 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T18:05:23.772 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB] 2026-03-09T18:05:23.799 INFO:teuthology.orchestra.run.vm02.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T18:05:23.809 INFO:teuthology.orchestra.run.vm02.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB] 2026-03-09T18:05:23.810 INFO:teuthology.orchestra.run.vm00.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB] 2026-03-09T18:05:23.836 INFO:teuthology.orchestra.run.vm02.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB] 2026-03-09T18:05:23.864 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB] 2026-03-09T18:05:23.890 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [3285 kB] 2026-03-09T18:05:23.922 INFO:teuthology.orchestra.run.vm02.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [3285 kB] 2026-03-09T18:05:23.968 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1256 kB] 2026-03-09T18:05:24.008 INFO:teuthology.orchestra.run.vm02.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1256 kB] 2026-03-09T18:05:24.294 INFO:teuthology.orchestra.run.vm02.stdout:Get:6 https://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB] 2026-03-09T18:05:24.343 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 4925 kB in 1s (8174 kB/s) 2026-03-09T18:05:24.835 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 4925 kB in 1s (4658 kB/s) 2026-03-09T18:05:25.126 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-09T18:05:25.138 DEBUG:teuthology.parallel:result is None 2026-03-09T18:05:25.601 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-09T18:05:25.613 DEBUG:teuthology.parallel:result is None 2026-03-09T18:05:25.613 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-09T18:05:25.615 INFO:teuthology.task.clock:Checking final clock skew... 
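After the packages are gone, the install task drops the Ceph apt repository it had added and refreshes the index, as the rm -f /etc/apt/sources.list.d/ceph.list and apt-get update commands above show. A rough equivalent of that cleanup (path taken from the log; error handling omitted):

import subprocess

CEPH_LIST = "/etc/apt/sources.list.d/ceph.list"

def remove_ceph_repo():
    # Remove the repo file installed for the test, then refresh the index so
    # later apt operations no longer see the Ceph packages.
    subprocess.run(["sudo", "rm", "-f", CEPH_LIST], check=True)
    subprocess.run(["sudo", "apt-get", "update"], check=True)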
2026-03-09T18:05:25.615 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T18:05:25.616 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout:============================================================================== 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout:+cp.hypermediaa. 189.97.54.122 2 u 9 128 377 25.053 +0.926 0.301 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout:-vps-fra2.orlean 169.254.169.254 4 u 130 128 377 20.976 +0.984 0.292 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout:+sv1.ggsrv.de 192.53.103.103 2 u 58 128 377 24.982 +0.744 0.202 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout:*static.222.16.4 35.73.197.144 2 u 116 128 377 0.407 +0.630 0.196 2026-03-09T18:05:25.688 INFO:teuthology.orchestra.run.vm02.stdout:-185.125.190.58 145.238.80.80 2 u 39 128 377 32.118 +0.859 13.223 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout:-47.ip-51-75-67. 225.254.30.190 4 u 1 256 377 21.197 +1.448 0.148 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout:*static.222.16.4 35.73.197.144 2 u 80 128 377 0.392 +0.552 0.725 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout:+vps-ber1.orlean 127.65.222.189 2 u 74 128 377 28.799 +1.316 0.120 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout:+vps-fra2.orlean 169.254.169.254 4 u 114 128 377 20.958 +0.585 0.295 2026-03-09T18:05:25.706 INFO:teuthology.orchestra.run.vm00.stdout:-a.chl.la 131.188.3.222 2 u 75 256 377 23.863 -0.143 0.254 2026-03-09T18:05:25.707 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-09T18:05:25.709 INFO:teuthology.task.ansible:Skipping ansible cleanup... 
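The final clock-skew check above runs ntpq -p (falling back to chronyc sources) and simply records the peer table. A small sketch of pulling the worst absolute offset out of that table, assuming the standard ntpq column layout shown; this post-processing is illustrative only, teuthology itself just logs the raw output here:

import subprocess

def max_ntp_offset_ms():
    # Columns: remote refid st t when poll reach delay offset jitter.
    # The first character of each peer line is the tally code (*, +, -, space).
    out = subprocess.run(["ntpq", "-p"], capture_output=True, text=True).stdout
    offsets = []
    for line in out.splitlines()[2:]:          # skip the two header lines
        cols = line[1:].split()                # drop the tally character
        if len(cols) >= 10 and cols[1] != ".POOL.":
            try:
                offsets.append(abs(float(cols[8])))   # offset column, in ms
            except ValueError:
                continue
    return max(offsets) if offsets else None

For the peer tables above this would come out around 1.4 ms at most, well within tolerance.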
2026-03-09T18:05:25.709 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-09T18:05:25.711 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-09T18:05:25.713 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-09T18:05:25.715 INFO:teuthology.task.internal:Duration was 3097.042729 seconds 2026-03-09T18:05:25.716 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-09T18:05:25.717 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-09T18:05:25.717 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T18:05:25.718 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T18:05:25.743 INFO:teuthology.task.internal.syslog:Checking logs for errors... 2026-03-09T18:05:25.743 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local 2026-03-09T18:05:25.743 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T18:05:25.794 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local 2026-03-09T18:05:25.794 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T18:05:25.805 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-09T18:05:25.805 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:05:25.837 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:05:26.047 INFO:teuthology.task.internal.syslog:Compressing syslogs... 
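The kern.log error scan above (run just before compression) is one long grep pipeline: match any line containing BUG, INFO or DEADLOCK, strip a list of known-benign patterns, and keep only the first survivor (head -n 1). The same idea expressed in Python, with the ignore list heavily abbreviated relative to the full pipeline in the log:

import re

SUSPECT = re.compile(r"\bBUG\b|\bINFO\b|\bDEADLOCK\b")
IGNORE = [re.compile(p) for p in (
    r"task .* blocked for more than .* seconds",
    r"lockdep is turned off",
    r"CRON",
    r"ceph-create-keys: INFO",
    r"ceph-crash",
)]

def first_suspect_line(kern_log_path):
    # Return the first line that matches the broad pattern but none of the
    # ignore patterns; None means the log looks clean, as in this run.
    with open(kern_log_path, errors="replace") as f:
        for line in f:
            if SUSPECT.search(line) and not any(p.search(line) for p in IGNORE):
                return line.rstrip("\n")
    return None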
2026-03-09T18:05:26.047 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:05:26.048 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:05:26.054 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T18:05:26.055 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T18:05:26.055 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T18:05:26.055 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:05:26.055 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T18:05:26.056 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T18:05:26.056 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T18:05:26.056 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T18:05:26.056 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:05:26.057 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T18:05:26.080 INFO:teuthology.orchestra.run.vm02.stderr: 93.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T18:05:26.091 INFO:teuthology.orchestra.run.vm00.stderr: 95.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T18:05:26.092 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-09T18:05:26.095 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
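Syslog compression is a plain find ... | xargs gzip -5 over the archive's syslog directory; the interleaved stderr above is just the per-host gzip invocations reporting their ratios in parallel. A sequential Python equivalent of the same step:

import gzip
import pathlib
import shutil

def compress_logs(syslog_dir):
    # Compress every *.log in place at level 5 and drop the original,
    # leaving only the .gz files for the archive transfer that follows.
    for log in pathlib.Path(syslog_dir).glob("*.log"):
        with open(log, "rb") as src, gzip.open(f"{log}.gz", "wb", compresslevel=5) as dst:
            shutil.copyfileobj(src, dst)
        log.unlink()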
2026-03-09T18:05:26.095 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T18:05:26.143 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T18:05:26.150 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-09T18:05:26.153 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:05:26.185 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:05:26.191 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core 2026-03-09T18:05:26.200 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core 2026-03-09T18:05:26.207 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:05:26.242 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:05:26.242 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:05:26.251 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:05:26.252 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-09T18:05:26.254 INFO:teuthology.task.internal:Transferring archived files... 2026-03-09T18:05:26.254 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583/remote/vm00 2026-03-09T18:05:26.254 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T18:05:26.293 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/583/remote/vm02 2026-03-09T18:05:26.293 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T18:05:26.302 INFO:teuthology.task.internal:Removing archive directory... 2026-03-09T18:05:26.302 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T18:05:26.333 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T18:05:26.348 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-09T18:05:26.351 INFO:teuthology.task.internal:Not uploading archives. 2026-03-09T18:05:26.351 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-09T18:05:26.353 INFO:teuthology.task.internal:Tidying up after the test... 
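Archived files come back to the teuthology host by streaming a tar of /home/ubuntu/cephtest/archive over the SSH connection, as the sudo tar c -f - -C ... commands above show. A simplified sketch of that transfer using a plain ssh subprocess; teuthology drives this through its own orchestra layer, so host and paths here are placeholders:

import io
import subprocess
import tarfile

def pull_archive(host, remote_dir, local_dir):
    # Stream the remote directory as an uncompressed tar on stdout, then
    # unpack it into the local per-remote archive directory.
    proc = subprocess.run(
        ["ssh", host, "sudo", "tar", "c", "-f", "-", "-C", remote_dir, "--", "."],
        capture_output=True, check=True)
    with tarfile.open(fileobj=io.BytesIO(proc.stdout)) as tar:
        tar.extractall(local_dir)

For this job the equivalent calls would be pull_archive("ubuntu@vm00.local", "/home/ubuntu/cephtest/archive", ".../583/remote/vm00") and the same for vm02, matching the destinations in the log.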
2026-03-09T18:05:26.353 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:05:26.377 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:05:26.379 INFO:teuthology.orchestra.run.vm00.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 18:05 /home/ubuntu/cephtest
2026-03-09T18:05:26.392 INFO:teuthology.orchestra.run.vm02.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 18:05 /home/ubuntu/cephtest
2026-03-09T18:05:26.393 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T18:05:26.398 INFO:teuthology.run:Summary data: description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} duration: 3097.042729139328 flavor: default owner: kyr success: true
2026-03-09T18:05:26.398 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:05:26.417 INFO:teuthology.run:pass
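The run ends by logging the summary (success: true, duration ~3097 s) and pushing the job record to the report server at http://localhost:8080. A hypothetical sketch of such a push is below; the endpoint path, HTTP method and payload shape are assumptions for illustration, not the actual paddles/report API:

import json
import urllib.request

def push_job_summary(base_url, run_name, job_id, summary):
    # Send the final summary dict (description, duration, flavor, owner,
    # success) to the report server; path and method are assumed in this sketch.
    req = urllib.request.Request(
        f"{base_url}/runs/{run_name}/jobs/{job_id}/",
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status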